linux-raid.vger.kernel.org archive mirror
* Installing Linux directly onto RAID6 Array...........
@ 2015-05-12 10:08 Another Sillyname
  2015-05-12 10:20 ` Rudy Zijlstra
                   ` (2 more replies)
  0 siblings, 3 replies; 34+ messages in thread
From: Another Sillyname @ 2015-05-12 10:08 UTC (permalink / raw)
  To: linux-raid

I've tried to do some research on this but the information out there
seems a bit contradictory (mainly because some is so old).

I want to install Fedora directly onto a RAID array (no separate boot disk).

My plan is to 'pre-configure' the 6 drives as a clean RAID6 array,
effectively sd[a-f] without partitions, and then attempt to install
Fedora 21. From sources it looks like Grub2 should recognise the array
and then allow the kernel to boot, thereby 'enabling' the array to
become visible and active.
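As a rough sketch, the pre-configuration step I have in mind would be something like the following (device names are illustrative, and mdadm --create is destructive):

```shell
# Sketch only -- device names are illustrative and --create DESTROYS data.
# Build a clean 6-drive RAID6 across the whole disks, no partition tables:
mdadm --create /dev/md0 --level=6 --raid-devices=6 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Watch the initial resync before installing anything on top:
cat /proc/mdstat
```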

However I have not been able to find an actual example of someone
trying this......thoughts?

The reason to do this is I'm intending to use a Mini ITX board with 6
sata ports and want to use 8TB drives in Raid6 to give me a very high
density data resilient small form factor storage box.

Ideas/Suggestions?

Thanks

Tony

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: Installing Linux directly onto RAID6 Array...........
  2015-05-12 10:08 Installing Linux directly onto RAID6 Array Another Sillyname
@ 2015-05-12 10:20 ` Rudy Zijlstra
  2015-05-12 12:20   ` Phil Turmel
  2015-05-12 13:27 ` Wilson, Jonathan
  2015-05-13  0:02 ` Adam Goryachev
  2 siblings, 1 reply; 34+ messages in thread
From: Rudy Zijlstra @ 2015-05-12 10:20 UTC (permalink / raw)
  To: Another Sillyname, linux-raid

Hi

Another Sillyname schreef op 12-05-15 om 12:08:
> I've tried to do some research on this but the information out there
> seems a bit contradictory (mainly because some is so old).
>
> I want to install Fedora directly onto a RAID array (no separate boot disk).


I'm doing something like that... and using network boot to solve those issues.

Last time I tried, grub2 could not handle a raid6 directly (this could have changed by now).

Cheers


Rudy

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: Installing Linux directly onto RAID6 Array...........
  2015-05-12 10:20 ` Rudy Zijlstra
@ 2015-05-12 12:20   ` Phil Turmel
  2015-05-12 12:31     ` Roman Mamedov
  0 siblings, 1 reply; 34+ messages in thread
From: Phil Turmel @ 2015-05-12 12:20 UTC (permalink / raw)
  To: Rudy Zijlstra, Another Sillyname, linux-raid

On 05/12/2015 06:20 AM, Rudy Zijlstra wrote:
> Hi
> 
> Another Sillyname schreef op 12-05-15 om 12:08:
>> I've tried to do some research on this but the information out there
>> seems a bit contradictory (mainly because some is so old).
>>
>> I want to install Fedora directly onto a RAID array (no separate boot
>> disk).
> 
> 
> Doing something like that... and using network boot to solve those issues
> 
> Last i tried grub2 could not handle a raid6 directly (could be changed now)

You could also boot from a USB device, possibly one that memory caches
its root filesystem (like System Rescue CD).

Phil


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: Installing Linux directly onto RAID6 Array...........
  2015-05-12 12:20   ` Phil Turmel
@ 2015-05-12 12:31     ` Roman Mamedov
       [not found]       ` <CAOS+5GHhUoYxTTYOWU7cdN6GSdffSMGrhWHU5ZtWEjc4jEm3eg@mail.gmail.com>
  0 siblings, 1 reply; 34+ messages in thread
From: Roman Mamedov @ 2015-05-12 12:31 UTC (permalink / raw)
  To: Phil Turmel; +Cc: Rudy Zijlstra, Another Sillyname, linux-raid


On Tue, 12 May 2015 08:20:42 -0400
Phil Turmel <philip@turmel.org> wrote:

> > Doing something like that... and using network boot to solve those issues
> > 
> > Last i tried grub2 could not handle a raid6 directly (could be changed now)
> 
> You could also boot from a USB device, possibly one that memory caches
> its root filesystem (like System Rescue CD).

Just place GRUB and the /boot partition with kernel and initrd on a USB stick,
the rest (root FS) can be on RAID6. That's what I do on one machine.

-- 
With respect,
Roman


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Fwd: Installing Linux directly onto RAID6 Array...........
       [not found]       ` <CAOS+5GHhUoYxTTYOWU7cdN6GSdffSMGrhWHU5ZtWEjc4jEm3eg@mail.gmail.com>
@ 2015-05-12 13:12         ` Another Sillyname
  2015-05-12 13:42           ` Roman Mamedov
  0 siblings, 1 reply; 34+ messages in thread
From: Another Sillyname @ 2015-05-12 13:12 UTC (permalink / raw)
  To: linux-raid

---------- Forwarded message ----------
From: Another Sillyname <anothersname@googlemail.com>
Date: 12 May 2015 at 14:11
Subject: Re: Installing Linux directly onto RAID6 Array...........
To: Roman Mamedov <rm@romanrm.net>


Guys

I don't want to use USB as the machine may sometimes be in a non-secure
environment; I want to boot directly from RAID.

Reading more, I think there may be a way....

https://help.ubuntu.com/community/Grub2/Installing

there's an interesting idea in there of effectively installing the
bootloader onto every drive in the array......
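If I've understood it right, that idea boils down to something like this (a sketch; the device names are illustrative, and Fedora ships the installer as grub2-install):

```shell
# Sketch: put GRUB2's boot code on every member disk, so the BIOS can
# start the loader from whichever drive it happens to pick first.
# Device names are illustrative.
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
    grub2-install "$d"
done
```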

On 12 May 2015 at 13:31, Roman Mamedov <rm@romanrm.net> wrote:
> On Tue, 12 May 2015 08:20:42 -0400
> Phil Turmel <philip@turmel.org> wrote:
>
>> > Doing something like that... and using network boot to solve those issues
>> >
>> > Last i tried grub2 could not handle a raid6 directly (could be changed now)
>>
>> You could also boot from a USB device, possibly one that memory caches
>> its root filesystem (like System Rescue CD).
>
> Just place GRUB and the /boot partition with kernel and initrd on a USB stick,
> the rest (root FS) can be on RAID6. That's what I do on one machine.
>
> --
> With respect,
> Roman

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: Installing Linux directly onto RAID6 Array...........
  2015-05-12 10:08 Installing Linux directly onto RAID6 Array Another Sillyname
  2015-05-12 10:20 ` Rudy Zijlstra
@ 2015-05-12 13:27 ` Wilson, Jonathan
  2015-05-12 14:05   ` Another Sillyname
  2015-05-13  0:02 ` Adam Goryachev
  2 siblings, 1 reply; 34+ messages in thread
From: Wilson, Jonathan @ 2015-05-12 13:27 UTC (permalink / raw)
  To: Another Sillyname; +Cc: linux-raid

On Tue, 2015-05-12 at 11:08 +0100, Another Sillyname wrote:
> I've tried to do some research on this but the information out there
> seems a bit contradictory (mainly because some is so old).
> 
> I want to install Fedora directly onto a RAID array (no separate boot disk).
> 
> My plan is to 'pre configure' the 6 drives as a clean RAID6 array,
> effectively sd[a-f] without partitions and then attempt to install
> Fedora 21, from sources it looks like Grub2 should recognise the array
> and then allow the Kernel to boot thereby 'enabling' the array to
> become visible and active.
> 
> However I have not been able to find an actual example of someone
> trying this......thoughts?
> 
> The reason to do this is I'm intending to use a Mini ITX board with 6
> sata ports and want to use 8TB drives in Raid6 to give me a very high
> density data resilient small form factor storage box.

Grub2 can handle booting into raid6, but some while ago the support was
sketchy if the array was degraded; this may have improved.

Your problem would be that it would require a biosboot (GPT, type:EF02,
size:1 MiB) partition to hold part of the loader as it will not fit
entirely into the "mbr."
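Creating that partition with sgdisk would look roughly like this (a sketch; /dev/sdX is a placeholder):

```shell
# Sketch: add a 1 MiB BIOS boot partition (GPT type EF02) so GRUB2's
# core image has somewhere to live outside the MBR. /dev/sdX is a
# placeholder for the real disk.
sgdisk --new=1:2048:+1M --typecode=1:EF02 --change-name=1:"BIOS boot" /dev/sdX
sgdisk --print /dev/sdX    # verify the new partition table
```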

A second problem might be that while drives larger than 2 TB can be used
on (most?) older boards, they might not be accessed/read
correctly/bootable by older non-EFI BIOSes. My old MB was quite happy to
boot from a 1 TB drive, and linux could see and use my 3TB drives, but
the bios only saw the 3TB drives as about 700GB (approx, as I recall).

If you are using an EFI system in EFI mode, you will need an EFI
partition (or partitions) somewhere. On my new system, all 5 of my 3TB
drives contain an EFI (FAT) partition of about 200-500MB, and the rest
is one large partition for the raid6 containing everything else (/,
/home, etc.). The only pain in the neck is remembering to copy
everything from the "live EFI" (default loaded as "/boot") to the
backup EFIs whenever I change/update stuff in it.
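That copy chore can be scripted; a rough sketch (the backup mount points here are hypothetical):

```shell
# Sketch: mirror the live EFI partition (mounted as /boot in my setup)
# onto the backup EFI partitions after a change. Backup mount points
# are hypothetical.
for backup in /mnt/efi2 /mnt/efi3 /mnt/efi4 /mnt/efi5; do
    rsync -a --delete /boot/ "$backup"/
done
```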
> 
> Ideas/Suggestions?
> 
> Thanks
> 
> Tony
> 

* Re: Installing Linux directly onto RAID6 Array...........
  2015-05-12 13:12         ` Fwd: " Another Sillyname
@ 2015-05-12 13:42           ` Roman Mamedov
  0 siblings, 0 replies; 34+ messages in thread
From: Roman Mamedov @ 2015-05-12 13:42 UTC (permalink / raw)
  To: Another Sillyname; +Cc: linux-raid


On Tue, 12 May 2015 14:12:10 +0100
Another Sillyname <anothersname@googlemail.com> wrote:

> there's an interesting idea in there of effectively installing the
> bootloader onto every drive in the array......

Oh, sure, you can install GRUB2 to every drive and also create a 6-member RAID1
with metadata version 0.90 from the first partition of each drive (256 MB is
enough) for /boot. That's another way of "booting from the same set of drives
as the main storage". I do that on another machine :)
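A sketch of that layout (partition names illustrative):

```shell
# Sketch: 6-way RAID1 for /boot with 0.90 metadata, which puts the md
# superblock at the END of each member, so the BIOS/bootloader just sees
# an ordinary filesystem. Partition names are illustrative.
mdadm --create /dev/md0 --level=1 --raid-devices=6 --metadata=0.90 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mkfs.ext4 /dev/md0    # then mount as /boot and install GRUB on each disk
```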

-- 
With respect,
Roman



* Re: Installing Linux directly onto RAID6 Array...........
  2015-05-12 13:27 ` Wilson, Jonathan
@ 2015-05-12 14:05   ` Another Sillyname
  0 siblings, 0 replies; 34+ messages in thread
From: Another Sillyname @ 2015-05-12 14:05 UTC (permalink / raw)
  To: Wilson, Jonathan; +Cc: linux-raid

It's a new z97 board so EFI rules apply.

I have some spare 3TB drives sitting around that I may try to practice
on.......see what will and won't work.

Still open to all suggestions and especially from anyone who's
actually done this.

On 12 May 2015 at 14:27, Wilson, Jonathan <piercing_male@hotmail.com> wrote:
> On Tue, 2015-05-12 at 11:08 +0100, Another Sillyname wrote:
>> I've tried to do some research on this but the information out there
>> seems a bit contradictory (mainly because some is so old).
>>
>> I want to install Fedora directly onto a RAID array (no separate boot disk).
>>
>> My plan is to 'pre configure' the 6 drives as a clean RAID6 array,
>> effectively sd[a-f] without partitions and then attempt to install
>> Fedora 21, from sources it looks like Grub2 should recognise the array
>> and then allow the Kernel to boot thereby 'enabling' the array to
>> become visible and active.
>>
>> However I have not been able to find an actual example of someone
>> trying this......thoughts?
>>
>> The reason to do this is I'm intending to use a Mini ITX board with 6
>> sata ports and want to use 8TB drives in Raid6 to give me a very high
>> density data resilient small form factor storage box.
>
> Grub2 can handle booting into raid6, but some while ago the support was
> sketchy if the array was degraded; this may have improved.
>
> Your problem would be that it would require a biosboot (GPT, type:EF02,
> size:1 MiB) partition to hold part of the loader as it will not fit
> entirely into the "mbr."
>
> A second problem might be that while drives larger than 2 GB can be used
> on (most?) older boards they might not be able to be accessed/read
> correctly/bootable by older non-EFI bios's. My old MB was quite happy to
> boot from a 1 TB drive and linux could see and use my 3TB drives but the
> bios only saw the 3TB drives as 700MB (approx, I recall)
>
> If you are using an EFI system in EFI mode, you will need an EFI
> partition(s) somewhere. On my new system I have all 5 of my 3TB drives
> contain an EFI-dos partition of about 200-500MB and the the rest as one
> large partition for the raid6 containing everything else (/, /home,
> etc.). The only pain in the neck is remembering to copy everything from
> the "live EFI" (default loaded as "/boot") to the backup EFI's when ever
> I change/update stuff in it.
>>
>> Ideas/Suggestions?
>>
>> Thanks
>>
>> Tony
>>
>
>


* Re: Installing Linux directly onto RAID6 Array...........
  2015-05-12 10:08 Installing Linux directly onto RAID6 Array Another Sillyname
  2015-05-12 10:20 ` Rudy Zijlstra
  2015-05-12 13:27 ` Wilson, Jonathan
@ 2015-05-13  0:02 ` Adam Goryachev
       [not found]   ` <CAOS+5GEP6+7OAHkqQjeyGHAB5u-_-Vq2JWGpcOemYHdCjmR5Lg@mail.gmail.com>
  2 siblings, 1 reply; 34+ messages in thread
From: Adam Goryachev @ 2015-05-13  0:02 UTC (permalink / raw)
  To: Another Sillyname, linux-raid

On 12/05/15 20:08, Another Sillyname wrote:
> I've tried to do some research on this but the information out there
> seems a bit contradictory (mainly because some is so old).
>
> I want to install Fedora directly onto a RAID array (no separate boot disk).
>
> My plan is to 'pre configure' the 6 drives as a clean RAID6 array,
> effectively sd[a-f] without partitions and then attempt to install
> Fedora 21, from sources it looks like Grub2 should recognise the array
> and then allow the Kernel to boot thereby 'enabling' the array to
> become visible and active.
>
> However I have not been able to find an actual example of someone
> trying this......thoughts?
>
> The reason to do this is I'm intending to use a Mini ITX board with 6
> sata ports and want to use 8TB drives in Raid6 to give me a very high
> density data resilient small form factor storage box.
>
> Ideas/Suggestions?

Create a small RAID1 partition at the beginning of every disk, use this 
for /boot and install grub. Use the rest of each disk as your RAID6 array.

This means as long as the bios can find any one of your disks, then the 
system will boot. Personally, I'd be inclined to put the complete OS on 
the RAID1, and then only use the RAID6 for the "data" mountpoint. This 
would allow the system to come up fully, even without the RAID6 (eg, 4 
out of 6 drives has died), and allow remote debug/diagnostics/etc 
without needing to try and boot from a rescue disk/etc.

This doesn't help you with what you asked, but just an idea/suggestion.

Regards,
Adam

-- 
Adam Goryachev Website Managers www.websitemanagers.com.au


* Fwd: Installing Linux directly onto RAID6 Array...........
       [not found]   ` <CAOS+5GEP6+7OAHkqQjeyGHAB5u-_-Vq2JWGpcOemYHdCjmR5Lg@mail.gmail.com>
@ 2015-05-24  1:18     ` Another Sillyname
  2015-05-24  8:36       ` Mikael Abrahamsson
  0 siblings, 1 reply; 34+ messages in thread
From: Another Sillyname @ 2015-05-24  1:18 UTC (permalink / raw)
  To: linux-raid

OK.

I've now read a LOT of stuff on this and have a much better
understanding.....though probably some misunderstandings as well
(still, I've learnt a lot, so that's good).

I have now realised that I can store the RAID configuration in the
mdadm.conf file and therefore incorporate it into the initramfs
configuration. However, I have a few areas of uncertainty that perhaps
some people on the list could clarify.

1.  Fedora currently has a bug whereby efibootmgr won't play nice with
EFI systems during the install....

https://fedoraproject.org/wiki/Common_F21_bugs#Cannot_place_bootloader_target_partition_.28e.g._.2Fboot.2Fefi.29_on_a_disk_other_than_the_first_in_custom_partitioning

This causes the install to fail as the bootloader doesn't get installed.

As Fedora installs GRUB2 as standard, and based on what I had read
elsewhere, I decided to create a bios_boot partition as the first
partition on the drives; during the install GRUB2 sees this and
configures it appropriately, thereby bypassing the bug above.

So I now have 5 partitions.

a  -  bios_boot
b  -  efi
c  -  boot
d  -  root
e  -  swap

I'll be adding one more when I'm happy this is working.

f  -  home

2.  The Asus mobo I'm using insists on treating the installed Live USB
as sda.....frankly a real pain in the a** as it means during the first
boot the root partition is mapped incorrectly to sdb4. Not a major
problem, but still annoying, so if anyone knows a way to 'force' a USB
stick not to be the first device I'd love to hear it.

3.  Using the methods above I have now created a bootable Fedora
system on a single drive, in preparation to RAID the required
partitions.  However, my concern is the mdadm metadata: simplistically,
metadata=1.2 apparently writes its superblock 4k after the start of the
device, and this is exactly where my efi partition (b above) starts. So
my concern is: will this superblock overwrite or mess with my current
partition table?

4.  If the next stage works, then I think what I'll actually end up
doing is......

scrub what I have now.

create the arrays before running the Fedora Live installer (this
assumes the installer will see the /dev/md[x] devices and allow them to
be used as install targets), then incorporate the mdadm.conf data into
the initramfs and regenerate the initramfs.
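The mdadm.conf/initramfs part of that would be roughly (a sketch; Fedora builds its initramfs with dracut):

```shell
# Sketch: record the assembled arrays in mdadm.conf, then rebuild the
# initramfs so the arrays can be assembled before the root FS is mounted.
mdadm --detail --scan >> /etc/mdadm.conf
dracut --force    # regenerate the initramfs for the running kernel
```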

Ideas/Thoughts/Criticisms?

Tony





On 13 May 2015 at 01:02, Adam Goryachev
<mailinglists@websitemanagers.com.au> wrote:
> On 12/05/15 20:08, Another Sillyname wrote:
>>
>> I've tried to do some research on this but the information out there
>> seems a bit contradictory (mainly because some is so old).
>>
>> I want to install Fedora directly onto a RAID array (no separate boot
>> disk).
>>
>> My plan is to 'pre configure' the 6 drives as a clean RAID6 array,
>> effectively sd[a-f] without partitions and then attempt to install
>> Fedora 21, from sources it looks like Grub2 should recognise the array
>> and then allow the Kernel to boot thereby 'enabling' the array to
>> become visible and active.
>>
>> However I have not been able to find an actual example of someone
>> trying this......thoughts?
>>
>> The reason to do this is I'm intending to use a Mini ITX board with 6
>> sata ports and want to use 8TB drives in Raid6 to give me a very high
>> density data resilient small form factor storage box.
>>
>> Ideas/Suggestions?
>
>
> Create a small RAID1 partition at the beginning of every disk, use this for
> /boot and install grub. Use the rest of each disk as your RAID6 array.
>
> This means as long as the bios can find any one of your disks, then the
> system will boot. Personally, I'd be inclined to put the complete OS on the
> RAID1, and then only use the RAID6 for the "data" mountpoint. This would
> allow the system to come up fully, even without the RAID6 (eg, 4 out of 6
> drives has died), and allow remote debug/diagnostics/etc without needing to
> try and boot from a rescue disk/etc.
>
> This doesn't help you with what you asked, but just an idea/suggestion.
>
> Regards,
> Adam
>
> --
> Adam Goryachev Website Managers www.websitemanagers.com.au


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-24  1:18     ` Fwd: " Another Sillyname
@ 2015-05-24  8:36       ` Mikael Abrahamsson
  2015-05-24  9:08         ` Another Sillyname
  2015-05-24 11:12         ` Fwd: " Wols Lists
  0 siblings, 2 replies; 34+ messages in thread
From: Mikael Abrahamsson @ 2015-05-24  8:36 UTC (permalink / raw)
  To: Another Sillyname; +Cc: linux-raid

On Sun, 24 May 2015, Another Sillyname wrote:

> So I now have 5 partitions.
>
> a  -  bios_boot
> b  -  efi
> c  -  boot
> d  -  root
> e  -  swap
>
> I'll be adding one more when I'm happy this is working.
>
> f  -  home
>
> 3.  Using the methods above I have now created a bootable fedora
> system, on a single drive in preparation to now RAID the required
> partitions.  However my concern comes regarding the mdadm metadata,
> simplistically metadata=1.2 apparently writes it's superblock to 4k
> after the start of the device, this is exactly where my efi partition
> (b above) starts, so my concern is will this superblock overwrite or
> mess with my current partition table?
>
> 4.  If the next stage works then I think what I'll actually end up
> doing is......
>
> scrub what I have now.
>
> create the arrays before running the Fedora Live installer (this
> assumes the installer will see /md[x] devices and allow them to be
> used to install to). Then incorporate the mdadm.conf data into the
> initramfs and regenerate initramfs.
>
> Ideas/Thoughts/Criticisms?

You don't want to run MD on the entire drive in this case; you most likely 
want to create multiple RAID1 and RAID6 arrays: RAID1 for your boot, root 
and swap, then RAID6 for your home partition. Use a superblock type that 
places the superblock at the end for the RAID1 partitions.

Also, you don't want to refer to "sda" when booting, you want to use 
UUID=<uuid> in fstab, crypttab etc.
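For example (the UUID below is a placeholder; on a real system it comes from blkid):

```shell
# Sketch: build an fstab entry from a filesystem UUID instead of a raw
# device name, so it survives the USB stick reshuffling sda/sdb.
# The UUID is a placeholder; read the real one with e.g.:  blkid /dev/md0
uuid="0f1b2c3d-aaaa-bbbb-cccc-1234567890ab"
line=$(printf 'UUID=%s  /  ext4  defaults  1 1' "$uuid")
echo "$line"    # paste this into /etc/fstab
```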

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-24  8:36       ` Mikael Abrahamsson
@ 2015-05-24  9:08         ` Another Sillyname
  2015-05-24  9:46           ` Mikael Abrahamsson
  2015-05-26  8:29           ` NeilBrown
  2015-05-24 11:12         ` Fwd: " Wols Lists
  1 sibling, 2 replies; 34+ messages in thread
From: Another Sillyname @ 2015-05-24  9:08 UTC (permalink / raw)
  To: Mikael Abrahamsson, linux-raid

I suspect you're correct in that I'll end up with the boot partitions
being in RAID1 and the data in RAID6, however I am seriously
considering having the boot in RAID6 as well...if I can integrate the
mdadm.conf into the initramfs properly I can't see a reason not to do
this?

Had a look at the metadata=0.9 option but reading the info on mdadm
metadata I think I'd prefer to have the metadata at the start of the
drive, also it looks like metadata=1.2 has extra functionality that I
may want to use later.

On 24 May 2015 at 09:36, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Sun, 24 May 2015, Another Sillyname wrote:
>
>> So I now have 5 partitions.
>>
>> a  -  bios_boot
>> b  -  efi
>> c  -  boot
>> d  -  root
>> e  -  swap
>>
>> I'll be adding one more when I'm happy this is working.
>>
>> f  -  home
>>
>> 3.  Using the methods above I have now created a bootable fedora
>> system, on a single drive in preparation to now RAID the required
>> partitions.  However my concern comes regarding the mdadm metadata,
>> simplistically metadata=1.2 apparently writes it's superblock to 4k
>> after the start of the device, this is exactly where my efi partition
>> (b above) starts, so my concern is will this superblock overwrite or
>> mess with my current partition table?
>>
>> 4.  If the next stage works then I think what I'll actually end up
>> doing is......
>>
>> scrub what I have now.
>>
>> create the arrays before running the Fedora Live installer (this
>> assumes the installer will see /md[x] devices and allow them to be
>> used to install to). Then incorporate the mdadm.conf data into the
>> initramfs and regenerate initramfs.
>>
>> Ideas/Thoughts/Criticisms?
>
>
> You don't want to run MD on the entire drive in this case, you most likely
> want to create multiple RAID1 and RAID6 mirrors. RAID1 your boot, root and
> swap, then run RAID6 on your home partition. Use use superblock type that
> creates the superblock at the end for the RAID1 partitions.
>
> Also, you don't want to refer to "sda" when booting, you want to use
> UUID=<uuid> in fstab, crypttab etc.
>
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-24  9:08         ` Another Sillyname
@ 2015-05-24  9:46           ` Mikael Abrahamsson
  2015-05-24 10:07             ` Another Sillyname
  2015-05-26  8:29           ` NeilBrown
  1 sibling, 1 reply; 34+ messages in thread
From: Mikael Abrahamsson @ 2015-05-24  9:46 UTC (permalink / raw)
  To: Another Sillyname; +Cc: linux-raid

On Sun, 24 May 2015, Another Sillyname wrote:

> I suspect you're correct in that I'll end up with the boot partitions
> being in RAID1 and the data in RAID6, however I am seriously
> considering having the boot in RAID6 as well...if I can integrate the
> mdadm.conf into the initramfs properly I can't see a reason not to do
> this?

The initramfs is usually located on the boot partition. A raid1 partition 
with a filesystem can be read by the bootloader even if the bootloader 
doesn't know how to start the raid first. This is not possible with 
raid6.

So /boot is usually raid1.

> Had a look at the metadata=0.9 option but reading the info on mdadm
> metadata I think I'd prefer to have the metadata at the start of the
> drive, also it looks like metadata=1.2 has extra functionality that I
> may want to use later.

Look at v1.0 superblock, it has the same functionality but stores the 
superblock at the end.
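So for /boot, the create command would differ only in the metadata flag (a sketch; partition names are illustrative):

```shell
# Sketch: RAID1 /boot with a v1.0 superblock -- modern v1 metadata, but
# stored at the END of the partition (like 0.90), so the bootloader can
# read the filesystem before md is running. Names are illustrative.
mdadm --create /dev/md0 --level=1 --raid-devices=6 --metadata=1.0 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
```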

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-24  9:46           ` Mikael Abrahamsson
@ 2015-05-24 10:07             ` Another Sillyname
  2015-05-24 10:35               ` Mikael Abrahamsson
  0 siblings, 1 reply; 34+ messages in thread
From: Another Sillyname @ 2015-05-24 10:07 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: linux-raid

But reading this.....

https://raid.wiki.kernel.org/index.php/RAID_superblock_formats#The_version-1_Superblock_Format

it makes a sound argument for the metadata superblock being at the
beginning of the drive.......in a nutshell, the kernel can 'construct'
the md device more easily if the metadata is at the beginning, and there
are data-resilience benefits in the case of a crash.



On 24 May 2015 at 10:46, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Sun, 24 May 2015, Another Sillyname wrote:
>
>> I suspect you're correct in that I'll end up with the boot partitions
>> being in RAID1 and the data in RAID6, however I am seriously
>> considering having the boot in RAID6 as well...if I can integrate the
>> mdadm.conf into the initramfs properly I can't see a reason not to do
>> this?
>
>
> The initramfs is usually located on the boot partition. A raid1 partition
> with a filesystem can be read by the bootloader even if the bootloader
> doesn't understand to start the raid first. This is not possible with raid6.
>
> So /boot is usually raid1.
>
>> Had a look at the metadata=0.9 option but reading the info on mdadm
>> metadata I think I'd prefer to have the metadata at the start of the
>> drive, also it looks like metadata=1.2 has extra functionality that I
>> may want to use later.
>
>
> Look at v1.0 superblock, it has the same functionality but stores the
> superblock at the end.
>
>
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-24 10:07             ` Another Sillyname
@ 2015-05-24 10:35               ` Mikael Abrahamsson
  2015-05-24 10:42                 ` Another Sillyname
  0 siblings, 1 reply; 34+ messages in thread
From: Mikael Abrahamsson @ 2015-05-24 10:35 UTC (permalink / raw)
  To: Another Sillyname; +Cc: linux-raid

On Sun, 24 May 2015, Another Sillyname wrote:

> But reading this.....
>
> https://raid.wiki.kernel.org/index.php/RAID_superblock_formats#The_version-1_Superblock_Format
>
> it makes a sound argument for the metadata superblock being at the
> beginning of the drive.......in a nutshell the kernel can 'construct'
> the md device easier if the metadata is at the beginning....as well as
> data resilience issues in the case of a crash.

I don't see how you have any choice in the case of /boot. I am a strong 
supporter of the v1.2 superblock (4k into the partition), as can be seen 
from the discussion about this some years back, but in the case of /boot, 
where the boot loader needs to access files on /boot before the raid is 
started, I don't see how you have a choice. So stay away from v0.90, but 
v1.0 has the same functionality apart from where the superblock is placed.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-24 10:35               ` Mikael Abrahamsson
@ 2015-05-24 10:42                 ` Another Sillyname
  0 siblings, 0 replies; 34+ messages in thread
From: Another Sillyname @ 2015-05-24 10:42 UTC (permalink / raw)
  To: Mikael Abrahamsson, linux-raid

That's the whole point of trying this test......

As I have a working, booting machine with (currently) 8 drives.......all
partitioned as above.

I have just created RAID5 mdadm arrays on drives [d-h] on partitions
[2-5]; once they've synced, I'm going to see if I can install Live USB
Fedora to them and then boot from there.  If it doesn't work, I've just
lost a bit of time.



On 24 May 2015 at 11:35, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Sun, 24 May 2015, Another Sillyname wrote:
>
>> But reading this.....
>>
>>
>> https://raid.wiki.kernel.org/index.php/RAID_superblock_formats#The_version-1_Superblock_Format
>>
>> it makes a sound argument for the metadata superblock being at the
>> beginning of the drive.......in a nutshell the kernel can 'construct'
>> the md device easier if the metadata is at the beginning....as well as
>> data resilience issues in the case of a crash.
>
>
> I don't see how you have any choice in case of /boot. I am a strong
> supporter of v1.2 superblock (4k into the partition) as can be seen from the
> discussion about this some years back, but in the case of /boot when the
> boot loader needs to access files on /boot before the raid is started, I
> don't see how you have a choice. So stay away from v0.90, but v1.0 has the
> same functionality apart from where the superblock is placed.
>
>
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-24  8:36       ` Mikael Abrahamsson
  2015-05-24  9:08         ` Another Sillyname
@ 2015-05-24 11:12         ` Wols Lists
  2015-05-24 11:57           ` Brad Campbell
  1 sibling, 1 reply; 34+ messages in thread
From: Wols Lists @ 2015-05-24 11:12 UTC (permalink / raw)
  To: Mikael Abrahamsson, Another Sillyname; +Cc: linux-raid

On 24/05/15 09:36, Mikael Abrahamsson wrote:
> You don't want to run MD on the entire drive in this case, you most
> likely want to create multiple RAID1 and RAID6 mirrors. RAID1 your boot,
> root and swap, then run RAID6 on your home partition. Use use superblock
> type that creates the superblock at the end for the RAID1 partitions.

Don't bother raid'ing swap at all! If you set the priority equal on all
the swaps, linux will do a raid 0 for you and, frankly, what's the point
of doing raid on your swap partition? You really shouldn't (in normal
usage) be using swap at all.
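The equal-priority trick is just fstab entries (a sketch; the UUIDs are placeholders):

```shell
# Sketch /etc/fstab fragment: one swap partition per drive at equal
# priority, which the kernel stripes across round-robin (no md needed).
# UUIDs are placeholders.
#
#   UUID=aaaaaaaa-placeholder  none  swap  defaults,pri=10  0 0
#   UUID=bbbbbbbb-placeholder  none  swap  defaults,pri=10  0 0
swapon --show    # confirm all swaps are active at the same priority
```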

NB - with swap, I always create one swap partition per drive, and make
it twice the maximum RAM the mobo will take. I have still not had
anybody come back to me with any evidence that the swap algorithm
doesn't work better with that amount of swap, and the belief that "twice
RAM no longer applies" was PROVEN to be an urban myth with linux 2.4
(yes, that was a long time ago, but as I said, no-one has ever given me
any evidence that things have changed since).

I'm a bit unusual, running gentoo I need large chunks of temporary space
so I have a couple of huge tmpfs drives.

Cheers,
Wol


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-24 11:12         ` Fwd: " Wols Lists
@ 2015-05-24 11:57           ` Brad Campbell
  2015-05-24 12:41             ` Wols Lists
  0 siblings, 1 reply; 34+ messages in thread
From: Brad Campbell @ 2015-05-24 11:57 UTC (permalink / raw)
  To: Wols Lists, Mikael Abrahamsson, Another Sillyname; +Cc: linux-raid

On 24/05/15 19:12, Wols Lists wrote:

> Don't bother raid'ing swap at all! If you set the priority equal on all
> the swaps, linux will do a raid 0 for you and, frankly, what's the point
> of doing raid on your swap partition? You really shouldn't (in normal
> usage) be using swap at all.

When you develop a bad block in one of your non-RAID swap partitions and 
you spend a week trying to figure out why processes randomly die for no 
apparent reason you'll reconsider that attitude.

Brad
-- 
Dolphins are so intelligent that within a few weeks they can
train Americans to stand at the edge of the pool and throw them
fish.


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-24 11:57           ` Brad Campbell
@ 2015-05-24 12:41             ` Wols Lists
  2015-05-24 13:48               ` Brad Campbell
  2015-05-24 14:06               ` Mikael Abrahamsson
  0 siblings, 2 replies; 34+ messages in thread
From: Wols Lists @ 2015-05-24 12:41 UTC (permalink / raw)
  To: Brad Campbell, Mikael Abrahamsson, Another Sillyname; +Cc: linux-raid

On 24/05/15 12:57, Brad Campbell wrote:
> On 24/05/15 19:12, Wols Lists wrote:
> 
>> Don't bother raid'ing swap at all! If you set the priority equal on all
>> the swaps, linux will do a raid 0 for you and, frankly, what's the point
>> of doing raid on your swap partition? You really shouldn't (in normal
>> usage) be using swap at all.
> 
> When you develop a bad block in one of your non-RAID swap partitions and
> you spend a week trying to figure out why processes randomly die for no
> apparent reason you'll reconsider that attitude.
> 
aiui raid won't help you at all here, will it?

Firstly, if you get write failures, the hard drive should swap the
block, or the write layer should swap it. Nothing to do with raid
whatsoever.

And if you get read errors, well, aiui, raid won't help here either -
especially with mirrored raid, you just get a read failure. Raid does
NOT give you error recovery unless the drive physically fails, and if
it's a bad block it gets fixed at the disk or disk driver level - well
below the raid driver.

Cheers,
Wol


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-24 12:41             ` Wols Lists
@ 2015-05-24 13:48               ` Brad Campbell
  2015-05-24 14:06               ` Mikael Abrahamsson
  1 sibling, 0 replies; 34+ messages in thread
From: Brad Campbell @ 2015-05-24 13:48 UTC (permalink / raw)
  To: Wols Lists, Mikael Abrahamsson, Another Sillyname; +Cc: linux-raid

On 24/05/15 20:41, Wols Lists wrote:

> aiui raid won't help you at all here, will it?

Of course it will.

> Firstly, if you get write failures, the hard drive should swap the
> block, or the write layer should swap it. Nothing to do with raid
> whatsoever.

Yep..

> And if you get read errors, well, aiui, raid won't help here either -
> especially with mirrored raid, you just get a read failure. Raid does
> NOT give you error recovery unless the drive physically fails, and if
> it's a bad block it gets fixed at the disk or disk driver level - well
> below the raid driver.

Nope. If you get a read failure it will try and pull from another 
mirror, and *that* will succeed. Drives don't return dud* data. They 
either succeed or fail.

* Of course I've heard anecdotes from people about that happening, but 
it's not supposed to happen under *any* circumstances.

-- 
Dolphins are so intelligent that within a few weeks they can
train Americans to stand at the edge of the pool and throw them
fish.


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-24 12:41             ` Wols Lists
  2015-05-24 13:48               ` Brad Campbell
@ 2015-05-24 14:06               ` Mikael Abrahamsson
  2015-05-24 14:53                 ` Wols Lists
  1 sibling, 1 reply; 34+ messages in thread
From: Mikael Abrahamsson @ 2015-05-24 14:06 UTC (permalink / raw)
  To: Wols Lists; +Cc: Brad Campbell, Another Sillyname, linux-raid

On Sun, 24 May 2015, Wols Lists wrote:

> And if you get read errors, well, aiui, raid won't help here either - 
> especially with mirrored raid, you just get a read failure. Raid does 
> NOT give you error recovery unless the drive physically fails, and if 
> it's a bad block it gets fixed at the disk or disk driver level - well 
> below the raid driver.

You're wrong. In case of a read error from the physical drive on RAID1, 
RAID5 or RAID6 then the information will be re-created from another drive, 
and written to the drive that threw a read error. This is the whole point 
of RAID with parity information.
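The repair path described above can be sketched in a few lines. This is a toy model, not md's actual implementation; the class names and the remap-on-successful-write behaviour are illustrative assumptions:

```python
# Toy model of RAID1 read repair: on a read error, pull the block from
# another mirror and write it back to the failing device.

class Disk:
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # block number -> bytes
        self.bad = set()             # blocks that throw read errors

    def read(self, n):
        if n in self.bad:
            raise IOError(f"read error at block {n}")
        return self.blocks[n]

    def write(self, n, data):
        # A successful write to a bad sector lets the drive remap it.
        self.blocks[n] = data
        self.bad.discard(n)

class Raid1:
    def __init__(self, mirrors):
        self.mirrors = mirrors

    def read(self, n):
        for i, disk in enumerate(self.mirrors):
            try:
                data = disk.read(n)
            except IOError:
                continue
            # Repair every mirror that failed before this one answered.
            for other in self.mirrors[:i]:
                other.write(n, data)
            return data
        raise IOError(f"block {n} unreadable on all mirrors")

d0 = Disk({0: b"swap-page"})
d1 = Disk({0: b"swap-page"})
d0.bad.add(0)                        # d0 develops a bad block

md = Raid1([d0, d1])
assert md.read(0) == b"swap-page"    # served from the good mirror
assert 0 not in d0.bad               # ...and rewritten to the bad one
```

The same fallback-and-rewrite logic is what makes a read error on one swap mirror invisible to the processes using that swap.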

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-24 14:06               ` Mikael Abrahamsson
@ 2015-05-24 14:53                 ` Wols Lists
  2015-05-24 14:57                   ` Mikael Abrahamsson
  0 siblings, 1 reply; 34+ messages in thread
From: Wols Lists @ 2015-05-24 14:53 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: Brad Campbell, Another Sillyname, linux-raid

On 24/05/15 15:06, Mikael Abrahamsson wrote:
> On Sun, 24 May 2015, Wols Lists wrote:
> 
>> And if you get read errors, well, aiui, raid won't help here either -
>> especially with mirrored raid, you just get a read failure. Raid does
>> NOT give you error recovery unless the drive physically fails, and if
>> it's a bad block it gets fixed at the disk or disk driver level - well
>> below the raid driver.
> 
> You're wrong. In case of a read error from the physical drive on RAID1,
> RAID5 or RAID6 then the information will be re-created from another
> drive, and written to the drive that threw a read error. This is the
> whole point of RAID with parity information.
> 
Except raid 1 isn't parity ... :-)

Personally, I still don't think "raid"ing swap is worth it, though.
Horses for courses, ram is cheap, and in my circumstances I don't think
I'd gain anything.

Cheers,
Wol



* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-24 14:53                 ` Wols Lists
@ 2015-05-24 14:57                   ` Mikael Abrahamsson
  2015-05-25 21:03                     ` Another Sillyname
  0 siblings, 1 reply; 34+ messages in thread
From: Mikael Abrahamsson @ 2015-05-24 14:57 UTC (permalink / raw)
  To: Wols Lists; +Cc: Brad Campbell, Another Sillyname, linux-raid

On Sun, 24 May 2015, Wols Lists wrote:

> On 24/05/15 15:06, Mikael Abrahamsson wrote:
>> On Sun, 24 May 2015, Wols Lists wrote:
>>
>>> And if you get read errors, well, aiui, raid won't help here either -
>>> especially with mirrored raid, you just get a read failure. Raid does
>>> NOT give you error recovery unless the drive physically fails, and if
>>> it's a bad block it gets fixed at the disk or disk driver level - well
>>> below the raid driver.
>>
>> You're wrong. In case of a read error from the physical drive on RAID1,
>> RAID5 or RAID6 then the information will be re-created from another
>> drive, and written to the drive that threw a read error. This is the
>> whole point of RAID with parity information.
>>
> Except raid 1 isn't parity ... :-)

RAID1 means every drive will have the same information, it's mirrored 
between the member disks. What do you think RAID1 is?

> Personally, I still don't think "raid"ing swap is worth it, though.
> Horses for courses, ram is cheap, and in my circumstances I don't think
> I'd gain anything.

You're welcome to believe anything you want, but if you're publicly 
telling people things that are just not true then you should expect to be 
told so.

You're welcome to tell people to not use swap at all, but telling people 
RAID1 has no benefit for swap because it won't protect you from read 
errors is just wrong.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-24 14:57                   ` Mikael Abrahamsson
@ 2015-05-25 21:03                     ` Another Sillyname
  2015-05-25 23:20                       ` Wols Lists
  0 siblings, 1 reply; 34+ messages in thread
From: Another Sillyname @ 2015-05-25 21:03 UTC (permalink / raw)
  To: linux-raid

Well, the swap thing was an easy fix/decision.....

I agree that putting swap into an array seems a bit of overkill;
however, one thing that can be a problem is listing swap partitions
in /etc/fstab and then having the drive with the swap fail......

My workaround, found on the web and rehashed, is to instead create a
4GB swap partition on each drive and then use the following script,
run from a systemd service, to activate the swaps at boot time.

#!/bin/bash

#  Script for service that autodetects and starts swap partitions

for f in $(fdisk -l | grep "Linux swap" | sort | cut -d' ' -f1); do
    swapon "$f"
done

As it only 'finds' swap on partitions that are actually present, it
avoids boot problems in the case of a dead drive.
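For reference, a minimal sketch of the systemd unit that could invoke such a script (the unit name, script path, and ordering dependencies here are illustrative assumptions, not taken from the original post; check your distribution's conventions):

```ini
# /etc/systemd/system/swap-autodetect.service  (hypothetical name/path)
[Unit]
Description=Activate swap on all present drives
DefaultDependencies=no
After=local-fs.target
Before=swap.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/sbin/swap-autodetect.sh

[Install]
WantedBy=swap.target
```

`Type=oneshot` with `RemainAfterExit=yes` suits a run-once activation script; ordering it before `swap.target` makes the swaps available early without any fstab entries.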

Due to time constraints I've had to build this using bios_boot plus
RAID1 /boot/efi and RAID1 /boot partitions for now, and the RAID6
partition is currently syncing with a projected finish around 15 hours
from now.  However, given what I've learnt, I'm convinced that using
an initramfs on pre-created partitions is the way to go.....RAID6 for
all the arrays, including /boot and /boot/efi.  Once I've got this
migration out of the way and have another test box to use, I intend to
take this a stage further and make it work.

Thanks for everyone's help and ideas, much appreciated.

Tony



On 24 May 2015 at 15:57, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Sun, 24 May 2015, Wols Lists wrote:
>
>> On 24/05/15 15:06, Mikael Abrahamsson wrote:
>>>
>>> On Sun, 24 May 2015, Wols Lists wrote:
>>>
>>>> And if you get read errors, well, aiui, raid won't help here either -
>>>> especially with mirrored raid, you just get a read failure. Raid does
>>>> NOT give you error recovery unless the drive physically fails, and if
>>>> it's a bad block it gets fixed at the disk or disk driver level - well
>>>> below the raid driver.
>>>
>>>
>>> You're wrong. In case of a read error from the physical drive on RAID1,
>>> RAID5 or RAID6 then the information will be re-created from another
>>> drive, and written to the drive that threw a read error. This is the
>>> whole point of RAID with parity information.
>>>
>> Except raid 1 isn't parity ... :-)
>
>
> RAID1 means every drive will have the same information, it's mirrored
> between the member disks. What do you think RAID1 is?
>
>> Personally, I still don't think "raid"ing swap is worth it, though.
>> Horses for courses, ram is cheap, and in my circumstances I don't think
>> I'd gain anything.
>
>
> You're welcome to believe anything you want, but if you're publicly
> telling people things that are just not true then you should expect to be
> told so.
>
> You're welcome to tell people to not use SWAP at all, but telling people
> RAID1 has no benefit for SWAP because it won't protect you from read errors
> is just wrong.
>
>
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-25 21:03                     ` Another Sillyname
@ 2015-05-25 23:20                       ` Wols Lists
  2015-05-26  7:08                         ` Another Sillyname
  0 siblings, 1 reply; 34+ messages in thread
From: Wols Lists @ 2015-05-25 23:20 UTC (permalink / raw)
  To: Another Sillyname, linux-raid

On 25/05/15 22:03, Another Sillyname wrote:
> My workaround, found on the web and re hashed, is to instead create a
> 4GB swap partition on each drive then using the following script
> automount the swaps at boot time using a systemd script.
> 
> #!/bin/bash
> 
> #  Script for service that autodetects and starts swap partitions
> 
> for f in $(fdisk -l | grep "Linux swap" | sort | cut -d' ' -f1 | tr
> '\n' ' '); do swapon $f; done
> 
> as it only 'finds' swaps on active partitions it prevents boot
> problems in the case of a dead drive.

Do you want linux to raid-0 your swap for you? IME your script will use
just one disk for swap until it overflows before bringing the next into
use, and so on.

If you want swap striped, I think you'll need to use "swapon -p 1" (or
whatever priority number), so all your swaps share one priority.
Otherwise I think you'll find they are assigned different priorities.
Of course, that may be what you want, depending on how your disks are
laid out.

Cheers,
Wol
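For comparison, the equal-priority striping suggested above can also be expressed statically in /etc/fstab (the UUIDs below are placeholders, not real values); with matching `pri=` values the kernel interleaves pages across the partitions, RAID0-style:

```
# /etc/fstab -- equal pri= values make the kernel stripe across swaps
UUID=xxxxxxxx-0001  none  swap  sw,pri=1  0  0
UUID=xxxxxxxx-0002  none  swap  sw,pri=1  0  0
```

The trade-off versus the autodetect script is the one discussed in this thread: static entries stripe predictably, but a dead drive can stall the boot.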


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-25 23:20                       ` Wols Lists
@ 2015-05-26  7:08                         ` Another Sillyname
  2015-05-26  8:06                           ` Mikael Abrahamsson
  0 siblings, 1 reply; 34+ messages in thread
From: Another Sillyname @ 2015-05-26  7:08 UTC (permalink / raw)
  To: linux-raid

Not bothered about raiding the swap thanks, my way will suffice as it
gives me maximum flexibility and resilience......I'm not really
performance driven on this project.

On 26 May 2015 at 00:20, Wols Lists <antlists@youngman.org.uk> wrote:
> On 25/05/15 22:03, Another Sillyname wrote:
>> My workaround, found on the web and re hashed, is to instead create a
>> 4GB swap partition on each drive then using the following script
>> automount the swaps at boot time using a systemd script.
>>
>> #!/bin/bash
>>
>> #  Script for service that autodetects and starts swap partitions
>>
>> for f in $(fdisk -l | grep "Linux swap" | sort | cut -d' ' -f1 | tr
>> '\n' ' '); do swapon $f; done
>>
>> as it only 'finds' swaps on active partitions it prevents boot
>> problems in the case of a dead drive.
>
> Do you want linux to raid 0 your swap for you? ime your script will use
> just one disk for swap until it overflows before bringing the next into
> use, etc etc.
>
> If you want swap striped, I think you'll need to use "swapon -p 1" or
> whatever number. Otherwise I think you'll find all your swaps are
> assigned different priorities. Of course, that may be what you want,
> depending on how your disks are laid out.
>
> Cheers,
> Wol


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-26  7:08                         ` Another Sillyname
@ 2015-05-26  8:06                           ` Mikael Abrahamsson
  2015-05-26 11:18                             ` Another Sillyname
  0 siblings, 1 reply; 34+ messages in thread
From: Mikael Abrahamsson @ 2015-05-26  8:06 UTC (permalink / raw)
  To: Another Sillyname; +Cc: linux-raid

On Tue, 26 May 2015, Another Sillyname wrote:

> Not bothered about raiding the swap thanks, my way will suffice as it
> gives me maximum flexibility and resilience......I'm not really
> performance driven on this project.

I don't see how running swap natively on the drives gives "maximum 
resilience". Higher resilience is gained by running raid1 for swap.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: Installing Linux directly onto RAID6 Array...........
  2015-05-24  9:08         ` Another Sillyname
  2015-05-24  9:46           ` Mikael Abrahamsson
@ 2015-05-26  8:29           ` NeilBrown
  2015-05-26 13:18             ` Another Sillyname
  1 sibling, 1 reply; 34+ messages in thread
From: NeilBrown @ 2015-05-26  8:29 UTC (permalink / raw)
  To: Another Sillyname; +Cc: Mikael Abrahamsson, linux-raid


On Sun, 24 May 2015 10:08:59 +0100 Another Sillyname
<anothersname@googlemail.com> wrote:

> I suspect you're correct in that I'll end up with the boot partitions
> being in RAID1 and the data in RAID6, however I am seriously
> considering having the boot in RAID6 as well...if I can integrate the
> mdadm.conf into the initramfs properly I can't see a reason not to do
> this?

mdadm.conf is largely a non-issue.  You don't need an mdadm.conf to assemble
your array.  All the raid configuration lives in the raid metadata.
All you need is for your initrd to know what device contains your root
filesystem (preferably by UUID) so that when mdadm finds that array, the
initrd code can mount it for you.

I believe that GRUB2 can load an initrd and kernel from a filesystem on an
mdraid device, but I don't know where the boot sector would load GRUB2 from.

md's v1.2 metadata leaves 4K at the start of each device.  If GRUB2 fits in
there, then it could certainly load, assemble the RAID6, then pull the files
off your root filesystem.  But I doubt it.

If GRUB tries to put the boot loader anywhere else, there is a good chance
that md could over-write it, as it believes that it owns all the space after
4K.

According to the documentation, GRUB2 either places the second stage in the
first 32K before the first partition, or in the filesystem at specific block
locations.
The first cannot work if md uses the whole device (works fine if md uses
partitions).
The second cannot work with RAID6, as the blocks are not at fixed
locations on one device.  This only really works for RAID1.
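The blocklist problem can be illustrated with a toy chunk-to-device mapping. This is a simplification, not md's exact RAID6 layout (real layouts also rotate the P and Q parity chunks); it only shows that consecutive logical chunks land on different member devices, so a blocklist naming sectors on a single device cannot describe a file:

```python
# Toy RAID6-style layout: 6 devices, 2 parity chunks per stripe,
# data-device rotation per stripe (left-symmetric-ish, simplified).

def chunk_location(logical_chunk, n_devices=6):
    """Map a logical data chunk to (device, stripe)."""
    n_data = n_devices - 2                 # two chunks per stripe are parity
    stripe = logical_chunk // n_data
    # Rotate the starting data device each stripe.
    device = (logical_chunk % n_data + stripe) % n_devices
    return device, stripe

# Four consecutive chunks of a file spread across four devices:
devices = [chunk_location(c)[0] for c in range(4)]
assert len(set(devices)) > 1
```

Because of this spreading, GRUB's blocklist scheme (fixed sector numbers on one disk) has nothing stable to point at on whole-device RAID6, whereas on RAID1 every member holds an identical copy.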

So feel free to try, and do report any results, but I doubt you'll get it to
work reliably.

NeilBrown

> 
> Had a look at the metadata=0.9 option but reading the info on mdadm
> metadata I think I'd prefer to have the metadata at the start of the
> drive, also it looks like metadata=1.2 has extra functionality that I
> may want to use later.
> 
> On 24 May 2015 at 09:36, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> > On Sun, 24 May 2015, Another Sillyname wrote:
> >
> >> So I now have 5 partitions.
> >>
> >> a  -  bios_boot
> >> b  -  efi
> >> c  -  boot
> >> d  -  root
> >> e  -  swap
> >>
> >> I'll be adding one more when I'm happy this is working.
> >>
> >> f  -  home
> >>
> >> 3.  Using the methods above I have now created a bootable fedora
> >> system, on a single drive in preparation to now RAID the required
> >> partitions.  However my concern comes regarding the mdadm metadata,
> >> simplistically metadata=1.2 apparently writes its superblock 4K
> >> after the start of the device, this is exactly where my efi partition
> >> (b above) starts, so my concern is will this superblock overwrite or
> >> mess with my current partition table?
> >>
> >> 4.  If the next stage works then I think what I'll actually end up
> >> doing is......
> >>
> >> scrub what I have now.
> >>
> >> create the arrays before running the Fedora Live installer (this
> >> assumes the installer will see /md[x] devices and allow them to be
> >> used to install to). Then incorporate the mdadm.conf data into the
> >> initramfs and regenerate initramfs.
> >>
> >> Ideas/Thoughts/Criticisms?
> >
> >
> > You don't want to run MD on the entire drive in this case, you most likely
> > want to create multiple RAID1 and RAID6 mirrors. RAID1 your boot, root and
> > swap, then run RAID6 on your home partition. Use a superblock type that
> > creates the superblock at the end for the RAID1 partitions.
> >
> > Also, you don't want to refer to "sda" when booting, you want to use
> > UUID=<uuid> in fstab, crypttab etc.
> >
> > --
> > Mikael Abrahamsson    email: swmike@swm.pp.se
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html




* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-26  8:06                           ` Mikael Abrahamsson
@ 2015-05-26 11:18                             ` Another Sillyname
  2015-05-26 14:08                               ` Mikael Abrahamsson
  0 siblings, 1 reply; 34+ messages in thread
From: Another Sillyname @ 2015-05-26 11:18 UTC (permalink / raw)
  To: linux-raid

Mikael

Very easy to understand......any approach that requires entering the
swaps into /etc/fstab means that if any drive (and its contained swap)
fails, the reboot can fail (plus the overhead of all those UUIDs in
fstab).  My way means that, as the swaps don't get loaded unless the
drive is alive, the reboot has more resilience.



On 26 May 2015 at 09:06, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> On Tue, 26 May 2015, Another Sillyname wrote:
>
>> Not bothered about raiding the swap thanks, my way will suffice as it
>> gives me maximum flexibility and resilience......I'm not really
>> performance driven on this project.
>
>
> I don't see how running swap natively on the drives gives "maximum
> resilience". Higher resilience is gained by running raid1 for swap.
>
>
> --
> Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: Installing Linux directly onto RAID6 Array...........
  2015-05-26  8:29           ` NeilBrown
@ 2015-05-26 13:18             ` Another Sillyname
  0 siblings, 0 replies; 34+ messages in thread
From: Another Sillyname @ 2015-05-26 13:18 UTC (permalink / raw)
  To: linux-raid

Excepting that by incorporating the mdadm.conf into the initramfs I
don't need to manually assemble the array, as the information is
already there.  Nor do I risk a superblock overwrite, as you go on to
describe, since I already have the md UUID info......anyway, I'm not
certain my way will work....that's the point of doing the testing.


On 26 May 2015 at 09:29, NeilBrown <neilb@suse.de> wrote:
> On Sun, 24 May 2015 10:08:59 +0100 Another Sillyname
> <anothersname@googlemail.com> wrote:
>
>> I suspect you're correct in that I'll end up with the boot partitions
>> being in RAID1 and the data in RAID6, however I am seriously
>> considering having the boot in RAID6 as well...if I can integrate the
>> mdadm.conf into the initramfs properly I can't see a reason not to do
>> this?
>
> mdadm.conf is largely a non-issue.  You don't need an mdadm.conf to assemble
> your array.  All the raid configuration lives in the raid metadata.
> All you need is for your initrd to know what device contains your root
> filesystem (preferably by UUID) so that when mdadm finds that array, the
> initrd code can mount it for you.
>
> I believe that GRUB2 can load an initrd and kernel from a filesystem on an
> mdraid device, but I don't know where the boot sector would load GRUB2 from.
>
> md's v1.2 metadata leaves 4K at the start of each device.  If GRUB2 fits in
> there, then it could certainly load, assemble the RAID6, then pull the files
> off your root filesystem.  But I doubt it.
>
> If GRUB tries to put the boot loader anywhere else, there is a good chance
> that md could over-write it, as it believes that it owns all the space after
> 4K.
>
> According to the documentation, GRUB2 either places the second stage in the
> first 32K before the first partition, or in the filesystem at specific block
> locations.
> The first cannot work if md uses the whole device (works fine if md uses
> partitions).
> The second cannot work with RAID6 as the blocks are in locations on one
> device.  This only really work for RAID1.
>
> So feel free to try, and do report any results, but I doubt you'll get it to
> work reliably.
>
> NeilBrown
>
>>
>> Had a look at the metadata=0.9 option but reading the info on mdadm
>> metadata I think I'd prefer to have the metadata at the start of the
>> drive, also it looks like metadata=1.2 has extra functionality that I
>> may want to use later.
>>
>> On 24 May 2015 at 09:36, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
>> > On Sun, 24 May 2015, Another Sillyname wrote:
>> >
>> >> So I now have 5 partitions.
>> >>
>> >> a  -  bios_boot
>> >> b  -  efi
>> >> c  -  boot
>> >> d  -  root
>> >> e  -  swap
>> >>
>> >> I'll be adding one more when I'm happy this is working.
>> >>
>> >> f  -  home
>> >>
>> >> 3.  Using the methods above I have now created a bootable fedora
>> >> system, on a single drive in preparation to now RAID the required
>> >> partitions.  However my concern comes regarding the mdadm metadata,
>> >> simplistically metadata=1.2 apparently writes its superblock 4K
>> >> after the start of the device, this is exactly where my efi partition
>> >> (b above) starts, so my concern is will this superblock overwrite or
>> >> mess with my current partition table?
>> >>
>> >> 4.  If the next stage works then I think what I'll actually end up
>> >> doing is......
>> >>
>> >> scrub what I have now.
>> >>
>> >> create the arrays before running the Fedora Live installer (this
>> >> assumes the installer will see /md[x] devices and allow them to be
>> >> used to install to). Then incorporate the mdadm.conf data into the
>> >> initramfs and regenerate initramfs.
>> >>
>> >> Ideas/Thoughts/Criticisms?
>> >
>> >
>> > You don't want to run MD on the entire drive in this case, you most likely
>> > want to create multiple RAID1 and RAID6 mirrors. RAID1 your boot, root and
>> > swap, then run RAID6 on your home partition. Use a superblock type that
>> > creates the superblock at the end for the RAID1 partitions.
>> >
>> > Also, you don't want to refer to "sda" when booting, you want to use
>> > UUID=<uuid> in fstab, crypttab etc.
>> >
>> > --
>> > Mikael Abrahamsson    email: swmike@swm.pp.se
>


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-26 11:18                             ` Another Sillyname
@ 2015-05-26 14:08                               ` Mikael Abrahamsson
  2015-05-26 20:11                                 ` Wols Lists
  0 siblings, 1 reply; 34+ messages in thread
From: Mikael Abrahamsson @ 2015-05-26 14:08 UTC (permalink / raw)
  To: Another Sillyname; +Cc: linux-raid

On Tue, 26 May 2015, Another Sillyname wrote:

> Very easy to understand......any way that requires entering the swaps 
> into /etc/fstab therefore means that if any drive and it's contained 
> swap fails the reboot can fail (plus the overhead of all those uuid 
> numbers in fstab).  My way means that as the swaps don't get loaded 
> unless the drive is alive the reboot has more resilience.

If you have RAID1 for swap, you only need a single component drive to work 
for the RAID1 to be able to start. It also means any drive can fail and 
your swap information still works.

And I don't understand why you're referring to "overhead of all those uuid 
numbers in fstab". You're worried about 10-20 bytes extra in fstab???

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-26 14:08                               ` Mikael Abrahamsson
@ 2015-05-26 20:11                                 ` Wols Lists
  2015-05-26 21:02                                   ` Another Sillyname
  2015-05-27  4:59                                   ` Mikael Abrahamsson
  0 siblings, 2 replies; 34+ messages in thread
From: Wols Lists @ 2015-05-26 20:11 UTC (permalink / raw)
  To: Mikael Abrahamsson, Another Sillyname; +Cc: linux-raid

On 26/05/15 15:08, Mikael Abrahamsson wrote:
> On Tue, 26 May 2015, Another Sillyname wrote:
> 
>> Very easy to understand......any way that requires entering the swaps
>> into /etc/fstab therefore means that if any drive and it's contained
>> swap fails the reboot can fail (plus the overhead of all those uuid
>> numbers in fstab).  My way means that as the swaps don't get loaded
>> unless the drive is alive the reboot has more resilience.
> 
> If you have RAID1 for swap, you only need a single component drive to
> work for the RAID1 to be able to start. It also means any drive can fail
> and your swap information still works.

And you're wasting a lot of disk space. What's important to you (that a
live swap disk shouldn't fail) is not important to me and doesn't seem
to be important to Another Sillyname.

My disk drives have 32GB swap partitions. Overkill? Dunno. But I'd
rather linux raid-0's them for 64GB of swap than mdadm raid-1's them
for 32GB. I have a couple of 20GB tmpfs partitions :-)

I don't know why Another Sillyname wants to chain his swap partitions,
but it's his choice. Maybe like me, his system gets rebooted a couple of
times a day, and failure to start is a far more real risk than failure
in use. He's running Fedora - that seems likely then ...

Cheers,
Wol


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-26 20:11                                 ` Wols Lists
@ 2015-05-26 21:02                                   ` Another Sillyname
  2015-05-27  4:59                                   ` Mikael Abrahamsson
  1 sibling, 0 replies; 34+ messages in thread
From: Another Sillyname @ 2015-05-26 21:02 UTC (permalink / raw)
  To: linux-raid

Guys

Without wishing to sound like the list police (no-one expects the list
police!!) this has drifted off topic into a fairly unimportant
area.....I've made my decision and will live with the consequences for
now.  As I'm currently 10 hours into a circa 40 hour 20TB data copy
you can all rest assured nothing short of a catastrophic failure is
going to change the current setup......when I get another test box in
a couple of weeks I'll revisit.

Until then thanks for all the help but can we let this lie please.

Tony

On 26 May 2015 at 21:11, Wols Lists <antlists@youngman.org.uk> wrote:
> On 26/05/15 15:08, Mikael Abrahamsson wrote:
>> On Tue, 26 May 2015, Another Sillyname wrote:
>>
>>> Very easy to understand......any way that requires entering the swaps
>>> into /etc/fstab therefore means that if any drive and it's contained
>>> swap fails the reboot can fail (plus the overhead of all those uuid
>>> numbers in fstab).  My way means that as the swaps don't get loaded
>>> unless the drive is alive the reboot has more resilience.
>>
>> If you have RAID1 for swap, you only need a single component drive to
>> work for the RAID1 to be able to start. It also means any drive can fail
>> and your swap information still works.
>
> And you're wasting a lot of disk space. What's important to you (that a
> live swap disk shouldn't fail) is not important to me and doesn't seem
> to be important to Another Sillyname.
>
> My disk drives have 32Gb swap partitions. Overkill? Dunno. But I'd
> rather linux raid 0's them for 64Gb swap than mdadm raid 1's them for
> 32Gb swap. I have a couple of 20Gb tmpfs partitions :-)
>
> I don't know why Another Sillyname wants to chain his swap partitions,
> but it's his choice. Maybe like me, his system gets rebooted a couple of
> times a day, and failure to start is a far more real risk than failure
> in use. He's running Fedora - that seems likely then ...
>
> Cheers,
> Wol


* Re: Fwd: Installing Linux directly onto RAID6 Array...........
  2015-05-26 20:11                                 ` Wols Lists
  2015-05-26 21:02                                   ` Another Sillyname
@ 2015-05-27  4:59                                   ` Mikael Abrahamsson
  1 sibling, 0 replies; 34+ messages in thread
From: Mikael Abrahamsson @ 2015-05-27  4:59 UTC (permalink / raw)
  To: Wols Lists; +Cc: Another Sillyname, linux-raid

On Tue, 26 May 2015, Wols Lists wrote:

> And you're wasting a lot of disk space. What's important to you (that a 
> live swap disk shouldn't fail) is not important to me and doesn't seem 
> to be important to Another Sillyname.

I am not so sure. He seems to want "maximum resilience", or at least 
that's what he's saying.

> I don't know why Another Sillyname wants to chain his swap partitions, 
> but it's his choice. Maybe like me, his system gets rebooted a couple of 
> times a day, and failure to start is a far more real risk than failure 
> in use. He's running Fedora - that seems likely then ...

Yes, it's his choice, but he should also make this choice understanding 
what he's doing, so that it's an informed and correct choice. When I read 
his postings, I am not so sure this is the case.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2015-05-27  4:59 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-05-12 10:08 Installing Linux directly onto RAID6 Array Another Sillyname
2015-05-12 10:20 ` Rudy Zijlstra
2015-05-12 12:20   ` Phil Turmel
2015-05-12 12:31     ` Roman Mamedov
     [not found]       ` <CAOS+5GHhUoYxTTYOWU7cdN6GSdffSMGrhWHU5ZtWEjc4jEm3eg@mail.gmail.com>
2015-05-12 13:12         ` Fwd: " Another Sillyname
2015-05-12 13:42           ` Roman Mamedov
2015-05-12 13:27 ` Wilson, Jonathan
2015-05-12 14:05   ` Another Sillyname
2015-05-13  0:02 ` Adam Goryachev
     [not found]   ` <CAOS+5GEP6+7OAHkqQjeyGHAB5u-_-Vq2JWGpcOemYHdCjmR5Lg@mail.gmail.com>
2015-05-24  1:18     ` Fwd: " Another Sillyname
2015-05-24  8:36       ` Mikael Abrahamsson
2015-05-24  9:08         ` Another Sillyname
2015-05-24  9:46           ` Mikael Abrahamsson
2015-05-24 10:07             ` Another Sillyname
2015-05-24 10:35               ` Mikael Abrahamsson
2015-05-24 10:42                 ` Another Sillyname
2015-05-26  8:29           ` NeilBrown
2015-05-26 13:18             ` Another Sillyname
2015-05-24 11:12         ` Fwd: " Wols Lists
2015-05-24 11:57           ` Brad Campbell
2015-05-24 12:41             ` Wols Lists
2015-05-24 13:48               ` Brad Campbell
2015-05-24 14:06               ` Mikael Abrahamsson
2015-05-24 14:53                 ` Wols Lists
2015-05-24 14:57                   ` Mikael Abrahamsson
2015-05-25 21:03                     ` Another Sillyname
2015-05-25 23:20                       ` Wols Lists
2015-05-26  7:08                         ` Another Sillyname
2015-05-26  8:06                           ` Mikael Abrahamsson
2015-05-26 11:18                             ` Another Sillyname
2015-05-26 14:08                               ` Mikael Abrahamsson
2015-05-26 20:11                                 ` Wols Lists
2015-05-26 21:02                                   ` Another Sillyname
2015-05-27  4:59                                   ` Mikael Abrahamsson

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).