linux-raid.vger.kernel.org archive mirror
* Raid 10 Issue
@ 2015-03-05 17:56 Stefan Lamby
  2015-03-05 20:07 ` Phil Turmel
  2015-03-06  8:54 ` Robin Hill
  0 siblings, 2 replies; 10+ messages in thread
From: Stefan Lamby @ 2015-03-05 17:56 UTC (permalink / raw)
  To: linux-raid@vger.kernel.org

Hello List.

I set up a new machine with Ubuntu 14.04.2 LTS using its installer, configuring
a RAID 10 with 2 disks and LVM on top of it. Now I would like to add 2 more
disks to the array so that I end up with 4 disks, no spare.

Searching the internet, I found that I cannot --grow the array with the mdadm
version this Ubuntu release ships (v3.2.5).
Is that right?

So I decided to build a new 4-disk array instead and move my data over
afterwards, which failed:
(Is it OK to do it that way, or do you recommend another approach?)

root@kvm15:~# mdadm --verbose --create --level=10 --raid-devices=4 /dev/md10
missing missing /dev/sdc1 /dev/sdd1
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: /dev/sdc1 appears to be part of a raid array:
    level=raid10 devices=4 ctime=Fri Feb 27 15:49:14 2015
mdadm: /dev/sdd1 appears to be part of a raid array:
    level=raid10 devices=4 ctime=Fri Feb 27 15:49:14 2015
mdadm: size set to 1904165376K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: RUN_ARRAY failed: Input/output error
              <<<<<<<<<<<<<<<<<<<<<<<<<<<
root@kvm15:~# 
root@kvm15:~# 
root@kvm15:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid10] [raid6] [raid5]
[raid4] 
md0 : active raid10 sdb1[1] sda1[0]
      1904165376 blocks super 1.2 2 near-copies [2/2] [UU]
      
unused devices: <none>


md0, by the way, is the current (running) array.

I made a few attempts to get this running; that must be why mdadm detects an
already existing RAID config.

The partitions on sdc and sdd were created using fdisk and have the same layout
as disk sdb, which looks like this:

(parted) print                                                            
Model: ATA WDC WD20PURX-64P (scsi)
Disk /dev/sdc: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      50.4GB  2000GB  1950GB  primary               RAID


Any help is very welcome.

Thanks.
Stefan
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Raid 10 Issue
  2015-03-05 17:56 Raid 10 Issue Stefan Lamby
@ 2015-03-05 20:07 ` Phil Turmel
  2015-03-06 10:09   ` Raid 10 Issue - Swapping Data from Array to Array Stefan Lamby
  2015-03-08 14:59   ` Raid 10 Issue Wilson, Jonathan
  2015-03-06  8:54 ` Robin Hill
  1 sibling, 2 replies; 10+ messages in thread
From: Phil Turmel @ 2015-03-05 20:07 UTC (permalink / raw)
  To: Stefan Lamby, linux-raid@vger.kernel.org

On 03/05/2015 12:56 PM, Stefan Lamby wrote:
> Hello List.
> 
> I was setting up a new machine using ubuntu 14.04.02 lts using its installer,
> configuring a raid 10 with 2 disks and lvm on top of it. I was using 2 disks and
> now I like to add 2 more disks to the array so i want to end up with 4 disks, no
> spare.
> 
> Searching the internet I found that I am not able to --grow the array with the
> mdadm version this ubuntu is using (v3.2.5).
> Is that right?
> 
> So I decided to build a new array that way and try to move my data afterwards,
> which failed:
> (Is it OK to do it that way or do you recommend another?)

No, you should be able to do this.  Probably without any shutdown.
Please show the full layout of your drives, partitions, and lvm.

I suggest lsdrv[1] for working layouts.  If your email is set to use
utf8, just paste the result in a reply.

Regards,

Phil Turmel

[1] https://github.com/pturmel/lsdrv

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Raid 10 Issue
  2015-03-05 17:56 Raid 10 Issue Stefan Lamby
  2015-03-05 20:07 ` Phil Turmel
@ 2015-03-06  8:54 ` Robin Hill
  2015-03-06  9:32   ` Raid 10 Issue [SOLVED] Stefan Lamby
  1 sibling, 1 reply; 10+ messages in thread
From: Robin Hill @ 2015-03-06  8:54 UTC (permalink / raw)
  To: Stefan Lamby; +Cc: linux-raid@vger.kernel.org

[-- Attachment #1: Type: text/plain, Size: 1860 bytes --]

On Thu Mar 05, 2015 at 06:56:00PM +0100, Stefan Lamby wrote:

> Hello List.
> 
> I was setting up a new machine using ubuntu 14.04.02 lts using its installer,
> configuring a raid 10 with 2 disks and lvm on top of it. I was using 2 disks and
> now I like to add 2 more disks to the array so i want to end up with 4 disks, no
> spare.
> 
> Searching the internet I found that I am not able to --grow the array with the
> mdadm version this ubuntu is using (v3.2.5).
> Is that right?
> 
> So I decided to build a new array that way and try to move my data afterwards,
> which failed:
> (Is it OK to do it that way or do you recommend another?)
> 
> root@kvm15:~# mdadm --verbose --create --level=10 --raid-devices=4 /dev/md10
> missing missing /dev/sdc1 /dev/sdd1
> mdadm: layout defaults to n2
> mdadm: layout defaults to n2
> mdadm: chunk size defaults to 512K
> mdadm: /dev/sdc1 appears to be part of a raid array:
>     level=raid10 devices=4 ctime=Fri Feb 27 15:49:14 2015
> mdadm: /dev/sdd1 appears to be part of a raid array:
>     level=raid10 devices=4 ctime=Fri Feb 27 15:49:14 2015
> mdadm: size set to 1904165376K
> Continue creating array? y
> mdadm: Defaulting to version 1.2 metadata
> mdadm: RUN_ARRAY failed: Input/output error
>               <<<<<<<<<<<<<<<<<<<<<<<<<<<
> root@kvm15:~# 
>
IIRC, in a RAID10 near-2 layout, each redundant pair is held on adjacent
drives. You've specified both drives of one adjacent pair as missing, so the
array cannot be run. Try doing:
    mdadm --verbose --create --level=10 --raid-devices=4 /dev/md10 \
        missing /dev/sdc1 missing /dev/sdd1
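
This pairing rule can be sketched in a few lines of shell (an illustration of
the near-2 layout, not mdadm's actual logic):

```shell
# Sketch of md's RAID10 near-2 pairing. Arguments are the devices in
# "mdadm --create" order; near-2 mirrors each chunk on adjacent devices,
# so arguments 1&2, 3&4, ... form mirror pairs. The array can only be
# started if every pair still has at least one live member.
can_run() {
    while [ "$#" -ge 2 ]; do
        if [ "$1" = missing ] && [ "$2" = missing ]; then
            echo "cannot run: a mirror pair has both members missing"
            return 0
        fi
        shift 2
    done
    echo "can run: every mirror pair has a live member"
}

# The failing invocation: both members of the first pair are missing.
can_run missing missing /dev/sdc1 /dev/sdd1
# -> cannot run: a mirror pair has both members missing

# The suggested ordering: each pair keeps one live member.
can_run missing /dev/sdc1 missing /dev/sdd1
# -> can run: every mirror pair has a live member
```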

HTH,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 181 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Raid 10 Issue [SOLVED]
  2015-03-06  8:54 ` Robin Hill
@ 2015-03-06  9:32   ` Stefan Lamby
  0 siblings, 0 replies; 10+ messages in thread
From: Stefan Lamby @ 2015-03-06  9:32 UTC (permalink / raw)
  To: Robin Hill; +Cc: linux-raid@vger.kernel.org


> Robin Hill <robin@robinhill.me.uk> wrote on 6 March 2015 at 09:54:
>
>
> On Thu Mar 05, 2015 at 06:56:00PM +0100, Stefan Lamby wrote:
>
> > Hello List.
> >
> > I was setting up a new machine using ubuntu 14.04.02 lts using its
> > installer,
> > configuring a raid 10 with 2 disks and lvm on top of it. I was using 2 disks
> > and
> > now I like to add 2 more disks to the array so i want to end up with 4
> > disks, no
> > spare.
> >
> > Searching the internet I found that I am not able to --grow the array with
> > the
> > mdadm version this ubuntu is using (v3.2.5).
> > Is that right?
> >
> > So I decided to build a new array that way and try to move my data
> > afterwards,
> > which failed:
> > (Is it OK to do it that way or do you recommend another?)
> >
> > root@kvm15:~# mdadm --verbose --create --level=10 --raid-devices=4 /dev/md10
> > missing missing /dev/sdc1 /dev/sdd1
> > mdadm: layout defaults to n2
> > mdadm: layout defaults to n2
> > mdadm: chunk size defaults to 512K
> > mdadm: /dev/sdc1 appears to be part of a raid array:
> > level=raid10 devices=4 ctime=Fri Feb 27 15:49:14 2015
> > mdadm: /dev/sdd1 appears to be part of a raid array:
> > level=raid10 devices=4 ctime=Fri Feb 27 15:49:14 2015
> > mdadm: size set to 1904165376K
> > Continue creating array? y
> > mdadm: Defaulting to version 1.2 metadata
> > mdadm: RUN_ARRAY failed: Input/output error
> > <<<<<<<<<<<<<<<<<<<<<<<<<<<
> > root@kvm15:~#
> >
> IIRC, in a RAID10 setup, the redundant pair is held on adjacent drives.
> You've specified two adjacent drives as missing, so the array cannot be
> run. Try doing:
> mdadm --verbose --create --level=10 --raid-devices=4 /dev/md10 \
> missing /dev/sdc1 missing /dev/sdd1
>

Dang!
One shot one catch, thank you so much!

Dear Maintainer,
please take a one-to-one copy of Robin's explanation and print it whenever
another dumb user like me tries to do the same and gets caught as well. This
could help avoid a lot of frustration. There are posts on the internet which
claim the order of the given devices is critical.

Thank you all for taking the time to help.
Stefan

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Raid 10 Issue - Swapping Data from Array to Array
  2015-03-05 20:07 ` Phil Turmel
@ 2015-03-06 10:09   ` Stefan Lamby
  2015-03-06 12:57     ` Phil Turmel
  2015-03-06 19:06     ` Raid 10 Issue - Booting in case raid failed Stefan Lamby
  2015-03-08 14:59   ` Raid 10 Issue Wilson, Jonathan
  1 sibling, 2 replies; 10+ messages in thread
From: Stefan Lamby @ 2015-03-06 10:09 UTC (permalink / raw)
  To: Phil Turmel, linux-raid@vger.kernel.org


> Phil Turmel <philip@turmel.org> wrote on 5 March 2015 at 21:07:
>
>
> On 03/05/2015 12:56 PM, Stefan Lamby wrote:
> > Hello List.
> >
> > I was setting up a new machine using ubuntu 14.04.02 lts using its
> > installer,
> > configuring a raid 10 with 2 disks and lvm on top of it. I was using 2 disks
> > and
> > now I like to add 2 more disks to the array so i want to end up with 4
> > disks, no
> > spare.
> >
> > Searching the internet I found that I am not able to --grow the array with
> > the
> > mdadm version this ubuntu is using (v3.2.5).
> > Is that right?
> >
> > So I decided to build a new array that way and try to move my data
> > afterwards,
> > which failed:
> > (Is it OK to do it that way or do you recommend another?)
>
> No, you should be able to do this. Probably without any shutdown.
> Please show the full layout of your drives, partitions, and lvm.
>
> I suggest lsdrv[1] for working layouts. If your email is set to use
> utf8, just paste the result in a reply.
>
> Regards,
>
> Phil Turmel
>
> [1] https://github.com/pturmel/lsdrv
>

Hi Phil.
I like your suggestion using lsdrv. Pretty nice. Here is the output (including
the newly created array):

root@kvm15:~/lsdrv/lsdrv# ./lsdrv 
PCI [ata_piix] 00:1f.5 IDE interface: Intel Corporation 82801JI (ICH10 Family) 2
port SATA IDE Controller #2
├scsi 0:0:0:0 HL-DT-ST DVD-RAM GH60L    {K1XA5SF1137}
│└sr0 3.68g [11:0] udf 'UDF_Volume'
└scsi 1:x:x:x [Empty]
PCI [ahci] 00:1f.2 SATA controller: Intel Corporation 82801JI (ICH10 Family)
SATA AHCI Controller
├scsi 2:0:0:0 ATA      WDC WD20PURX-64P {WD-WCC4M1LPT1AE}
│└sda 1.82t [8:0] Partitioned (dos)
│ └sda1 1.77t [8:1] MD raid10,near2 (0/2) (w/ sdb1) in_sync 'kvm15:0'
{75079a2f-acb8-c475-85f8-ca430ad85c4c}
│  └md0 1.77t [9:0] MD v1.2 raid10,near2 (2) clean, 512k Chunk
{75079a2f:acb8c475:85f8ca43:0ad85c4c}
│   │               PV LVM2_member 1.01t used, 780.45g free
{2hsby0-0FOT-PPbC-il1r-ux9J-lUd2-nPHj7T}
│   └VG vg_raid10 1.77t 780.45g free {HbjouC-RgUe-YYNB-z2ns-4kzK-RwJH-RHWSWq}
│    ├dm-0 479.39g [252:0] LV home ext4 {2d67d9cc-0378-4669-9d72-7b7c7071dea8}
│    │└Mounted as /dev/mapper/vg_raid10-home @ /home
│    ├dm-1 93.13g [252:1] LV root ext4 {c14e4524-e95c-45c2-bfa0-75d529ed48fe}
│    │└Mounted as /dev/mapper/vg_raid10-root @ /
│    ├dm-4 23.28g [252:4] LV swap swap {9e1a582f-1c88-44a2-be90-aafcb96805c7}
│    ├dm-3 46.56g [252:3] LV tmp ext4 {ac67d0d9-049c-4cf2-9a0e-591cdb6a3559}
│    │└Mounted as /dev/mapper/vg_raid10-tmp @ /tmp
│    └dm-2 393.13g [252:2] LV var ext4 {ff71c558-c1f8-4410-8e2a-dc9c77c27a03}
│     └Mounted as /dev/mapper/vg_raid10-var @ /var
├scsi 3:0:0:0 ATA      WDC WD20PURX-64P {WD-WCC4M5LAR62D}
│└sdb 1.82t [8:16] Partitioned (dos)
│ └sdb1 1.77t [8:17] MD raid10,near2 (1/2) (w/ sda1) in_sync 'kvm15:0'
{75079a2f-acb8-c475-85f8-ca430ad85c4c}
│  └md0 1.77t [9:0] MD v1.2 raid10,near2 (2) clean, 512k Chunk
{75079a2f:acb8c475:85f8ca43:0ad85c4c}
│                   PV LVM2_member 1.01t used, 780.45g free
{2hsby0-0FOT-PPbC-il1r-ux9J-lUd2-nPHj7T}
├scsi 4:0:0:0 ATA      WDC WD20PURX-64P {WD-WCC4M7YA1ANR}
│└sdc 1.82t [8:32] Partitioned (dos)
│ └sdc1 1.77t [8:33] MD raid10,near2 (1/4) (w/ sdd1) in_sync 'kvm15:10'
{c4540426-9c66-8fe2-4795-13f242d233b4}
│  └md10 3.55t [9:10] MD v1.2 raid10,near2 (4) clean DEGRADEDx2, 512k Chunk
{c4540426:9c668fe2:479513f2:42d233b4}
│                     Empty/Unknown
└scsi 5:0:0:0 ATA      WDC WD20PURX-64P {WD-WCC4M5AFRYVP}
 └sdd 1.82t [8:48] Partitioned (dos)
  └sdd1 1.77t [8:49] MD raid10,near2 (3/4) (w/ sdc1) in_sync 'kvm15:10'
{c4540426-9c66-8fe2-4795-13f242d233b4}
   └md10 3.55t [9:10] MD v1.2 raid10,near2 (4) clean DEGRADEDx2, 512k Chunk
{c4540426:9c668fe2:479513f2:42d233b4}
                      Empty/Unknown


This is what I got right now.
What do you recommend to do?

Stefan

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Raid 10 Issue - Swapping Data from Array to Array
  2015-03-06 10:09   ` Raid 10 Issue - Swapping Data from Array to Array Stefan Lamby
@ 2015-03-06 12:57     ` Phil Turmel
  2015-03-09  8:51       ` Raid 10 Issue - Swapping Data from Array to Array [SOLVED] Stefan Lamby
  2015-03-06 19:06     ` Raid 10 Issue - Booting in case raid failed Stefan Lamby
  1 sibling, 1 reply; 10+ messages in thread
From: Phil Turmel @ 2015-03-06 12:57 UTC (permalink / raw)
  To: Stefan Lamby, linux-raid@vger.kernel.org

On 03/06/2015 05:09 AM, Stefan Lamby wrote:
> 
> 
> This is what I got right now.
> What do you recommend to do?

1) pvcreate on the new array
2) vgextend to add the new array to the volume group
3) pvmove to get the data into the new array
4) vgreduce to disconnect the old array from lvm
5) pvremove to wipe the old array's lvm meta
6) stop the old array
7) mdadm --zero-superblock to clear the old members
8) mdadm --add to put those members into the new array
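
The steps above might look like the following shell sketch; the device and VG
names (/dev/md0, /dev/md10, vg_raid10, /dev/sda1, /dev/sdb1) come from the
layouts shown earlier in the thread, and it assumes pvmove completes cleanly,
so treat it as an outline rather than a paste-ready script:

```shell
# 1) Make the new array an LVM physical volume
pvcreate /dev/md10
# 2) Add it to the existing volume group
vgextend vg_raid10 /dev/md10
# 3) Migrate all extents off the old array (can run while mounted)
pvmove /dev/md0 /dev/md10
# 4) Drop the old array from the volume group
vgreduce vg_raid10 /dev/md0
# 5) Wipe the LVM metadata on the old array
pvremove /dev/md0
# 6) Stop the old array
mdadm --stop /dev/md0
# 7) Clear the md superblocks on its former members
mdadm --zero-superblock /dev/sda1 /dev/sdb1
# 8) Add those partitions into the missing slots of the new array
mdadm --add /dev/md10 /dev/sda1 /dev/sdb1
```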

Regards,

Phil Turmel


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Raid 10 Issue - Booting in case raid failed
  2015-03-06 10:09   ` Raid 10 Issue - Swapping Data from Array to Array Stefan Lamby
  2015-03-06 12:57     ` Phil Turmel
@ 2015-03-06 19:06     ` Stefan Lamby
  2015-03-06 20:12       ` Phil Turmel
  1 sibling, 1 reply; 10+ messages in thread
From: Stefan Lamby @ 2015-03-06 19:06 UTC (permalink / raw)
  To: linux-raid@vger.kernel.org

Hi list.

If everything works out OK, I will end up with a RAID 10 array with 4 devices.

My partition design and layout structure can be found at the end, if needed.

There are a few questions left for me in case I have to boot with a failed disk:

1) As you might have seen from the partition design, only partition sda1 has
the boot flag set. As far as I can tell, the Ubuntu installer ran grub-install
only for sda. I am somewhat afraid of what will happen if sda fails in the
future. Would it be a good idea to grub-install to all the other devices as
well?
2) What about the boot flag if I grub-install the other devices as well? Should
it be 0 or 1? Do I have to leave it unset and, in case things go wrong, boot
from a live CD and set it so I can boot from another device?

What do you recommend?

Thanks
Stefan




Here is my layout, for reference:

root@kvm15:~# fdisk -l /dev/sda

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00071c2b

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *    98435072  3907028991  1904296960   fd  Linux raid autodetect

Here as an example for all other disks

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x0008624b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1        98435072  3907028991  1904296960   fd  Linux raid autodetect

PCI [ahci] 00:1f.2 SATA controller: Intel Corporation 82801JI (ICH10 Family)
SATA AHCI Controller
├scsi 2:0:0:0 ATA      WDC WD20PURX-64P {WD-WCC4M1LPT1AE}
│└sda 1.82t [8:0] Partitioned (dos)
│ └sda1 1.77t [8:1] MD raid10,near2 (0/2) (w/ sdb1) in_sync 'kvm15:0'
{75079a2f-acb8-c475-85f8-ca430ad85c4c}
│  └md0 1.77t [9:0] MD v1.2 raid10,near2 (2) clean, 512k Chunk
{75079a2f:acb8c475:85f8ca43:0ad85c4c}
│   │               PV LVM2_member 1.01t used, 780.45g free
{2hsby0-0FOT-PPbC-il1r-ux9J-lUd2-nPHj7T}
│   └VG vg_raid10 5.32t (w/ md10) 3.30t free
{HbjouC-RgUe-YYNB-z2ns-4kzK-RwJH-RHWSWq}
│    ├dm-0 479.39g [252:0] LV home ext4 {2d67d9cc-0378-4669-9d72-7b7c7071dea8}
│    │└Mounted as /dev/mapper/vg_raid10-home @ /home
│    ├dm-1 93.13g [252:1] LV root ext4 {c14e4524-e95c-45c2-bfa0-75d529ed48fe}
│    │└Mounted as /dev/mapper/vg_raid10-root @ /
│    ├dm-4 23.28g [252:4] LV swap swap {9e1a582f-1c88-44a2-be90-aafcb96805c7}
│    ├dm-3 46.56g [252:3] LV tmp ext4 {ac67d0d9-049c-4cf2-9a0e-591cdb6a3559}
│    │└Mounted as /dev/mapper/vg_raid10-tmp @ /tmp
│    └dm-2 393.13g [252:2] LV var ext4 {ff71c558-c1f8-4410-8e2a-dc9c77c27a03}
│     └Mounted as /dev/mapper/vg_raid10-var @ /var
├scsi 3:0:0:0 ATA      WDC WD20PURX-64P {WD-WCC4M5LAR62D}
│└sdb 1.82t [8:16] Partitioned (dos)
│ └sdb1 1.77t [8:17] MD raid10,near2 (1/2) (w/ sda1) in_sync 'kvm15:0'
{75079a2f-acb8-c475-85f8-ca430ad85c4c}
│  └md0 1.77t [9:0] MD v1.2 raid10,near2 (2) clean, 512k Chunk
{75079a2f:acb8c475:85f8ca43:0ad85c4c}
│                   PV LVM2_member 1.01t used, 780.45g free
{2hsby0-0FOT-PPbC-il1r-ux9J-lUd2-nPHj7T}
├scsi 4:0:0:0 ATA      WDC WD20PURX-64P {WD-WCC4M7YA1ANR}
│└sdc 1.82t [8:32] Partitioned (dos)
│ └sdc1 1.77t [8:33] MD raid10,near2 (1/4) (w/ sdd1) in_sync 'kvm15:10'
{c4540426-9c66-8fe2-4795-13f242d233b4}
│  └md10 3.55t [9:10] MD v1.2 raid10,near2 (4) active DEGRADEDx2, 512k Chunk
{c4540426:9c668fe2:479513f2:42d233b4}
│   │                 PV LVM2_member 1.01t used, 2.54t free
{wYV8fH-uOp2-E88P-EIp5-6U33-chYg-S94vvy}
│   └VG vg_raid10 5.32t (w/ md0) 3.30t free
{HbjouC-RgUe-YYNB-z2ns-4kzK-RwJH-RHWSWq}
└scsi 5:0:0:0 ATA      WDC WD20PURX-64P {WD-WCC4M5AFRYVP}
 └sdd 1.82t [8:48] Partitioned (dos)
  └sdd1 1.77t [8:49] MD raid10,near2 (3/4) (w/ sdc1) in_sync 'kvm15:10'
{c4540426-9c66-8fe2-4795-13f242d233b4}
   └md10 3.55t [9:10] MD v1.2 raid10,near2 (4) active DEGRADEDx2, 512k Chunk
{c4540426:9c668fe2:479513f2:42d233b4}
                      PV LVM2_member 1.01t used, 2.54t free
{wYV8fH-uOp2-E88P-EIp5-6U33-chYg-S94vvy}

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Raid 10 Issue - Booting in case raid failed
  2015-03-06 19:06     ` Raid 10 Issue - Booting in case raid failed Stefan Lamby
@ 2015-03-06 20:12       ` Phil Turmel
  0 siblings, 0 replies; 10+ messages in thread
From: Phil Turmel @ 2015-03-06 20:12 UTC (permalink / raw)
  To: Stefan Lamby, linux-raid@vger.kernel.org

On 03/06/2015 02:06 PM, Stefan Lamby wrote:
> Hi list.
> 
> If everything will work out OK, I will end up with an raid 10 array with 4
> devices.
> 
> My partition design and layout structure will be found at the end, if needed.
> 
> There are a few questions left for me in case I have to boot with a failed disk:
> 
> 1) As you might have seen from the partition design, only partition sda1 has the
> boot flag set. As far as I guess, the ubuntu installer was using grub-install
> only for sda. I am kind of afraid what will happen, in case sda will fail in the
> future. Will it be a good idea to grub-install to all the other devices also?

Yes, sort of.

> 2) What about the boot flag, if I need to grub-install the other devices also?
> Should it be O or 1? Do I have to leave it set to false and in case things go
> wrong boot from a live cd and set it to on to boot from another device?

Set to true.

But the "sort of" comes from your reliance on grub support for MD raid,
and having the appropriate mirrors containing the boot folder.  You've
left enough space before your first partition (48g) to easily hold a
plain raid1 x4 boot partition and a raid6 root partition (use a small
chunk size for that).  Then your system could boot with any two drives
missing, and let you know what's possible with the large raid10.

Also note that this kind of boot redundancy only helps if the bad drive
is entirely missing at boot time.  If you really need boot redundancy,
you have fewer choices:  BIOS fakeraid, or hardware raid with a BIOS
extension, or EFI boot with a monolithic kernel/initramfs on each device.
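
Acting on point 1 might look like the following shell sketch (an outline under
the assumption of the BIOS/MBR setup shown in this thread; device names are
the ones from the layouts above):

```shell
# Put grub's MBR boot code on every array member so the BIOS can fall
# back to another drive when sda is entirely absent at boot time, and
# set the boot flag on each first partition (some BIOSes require it).
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    grub-install "$disk"
    parted -s "$disk" set 1 boot on
done
```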

HTH,

Phil

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Raid 10 Issue
  2015-03-05 20:07 ` Phil Turmel
  2015-03-06 10:09   ` Raid 10 Issue - Swapping Data from Array to Array Stefan Lamby
@ 2015-03-08 14:59   ` Wilson, Jonathan
  1 sibling, 0 replies; 10+ messages in thread
From: Wilson, Jonathan @ 2015-03-08 14:59 UTC (permalink / raw)
  To: Phil Turmel; +Cc: Stefan Lamby, linux-raid@vger.kernel.org

On Thu, 2015-03-05 at 15:07 -0500, Phil Turmel wrote:
> On 03/05/2015 12:56 PM, Stefan Lamby wrote:
> > Hello List.
> > 
> > I was setting up a new machine using ubuntu 14.04.02 lts using its installer,
> > configuring a raid 10 with 2 disks and lvm on top of it. I was using 2 disks and
> > now I like to add 2 more disks to the array so i want to end up with 4 disks, no
> > spare.
> > 
> > Searching the internet I found that I am not able to --grow the array with the
> > mdadm version this ubuntu is using (v3.2.5).
> > Is that right?
> > 
> > So I decided to build a new array that way and try to move my data afterwards,
> > which failed:
> > (Is it OK to do it that way or do you recommend another?)
> 
> No, you should be able to do this.  Probably without any shutdown.
> Please show the full layout of your drives, partitions, and lvm.
> 
> I suggest lsdrv[1] for working layouts.  If your email is set to use
> utf8, just paste the result in a reply.
> 
> Regards,
> 
> Phil Turmel
> 
> [1] https://github.com/pturmel/lsdrv

OT: What a fantastic little script. One of my biggest annoyances with my setup
was that it was a pain to match an "OS disk designation" to a "serial no.",
making identification of the physical device a chore.

Also, one of my cheap "4-port SATA" cards doesn't identify the port number on
the device, and while it was possible to work it out by tracing drive serial ->
SATA cable -> socket and then mentally matching sd* device serials to ports, it
was something I kept putting off as "a pain". I was also bugged by my md member
device numbers being out of whack with my sd designations (sda[4] sdb[2]
sdc[1] sdd[3]), and with 12 devices the last thing you want when something is
failing is to have to hunt around to work out which physical device relates to
which OS-identified device.

With this I can now easily mark my disks with a simple identification scheme,
likewise the cables on both ends, and make a sketch of the ports, their
sequence, and which disk sits in which slot.

Obviously, if I were starting my system from scratch I would have done all this
from the get-go... but after 4 cases, various upgrades and additions, differing
numbers of motherboard SATA ports, various SATA cards, and multiple variations
of md/partition layouts, it all became a huge muddle, and one elegant script
has made it simple to finally do what I have put off for far too long.




^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Raid 10 Issue - Swapping Data from Array to Array [SOLVED]
  2015-03-06 12:57     ` Phil Turmel
@ 2015-03-09  8:51       ` Stefan Lamby
  0 siblings, 0 replies; 10+ messages in thread
From: Stefan Lamby @ 2015-03-09  8:51 UTC (permalink / raw)
  To: Phil Turmel, linux-raid@vger.kernel.org

Hi Phil.
I did what you suggested and ended up with a working RAID 10 array.

Thank you so much for your support and time.
God bless you.
Stefan


> Phil Turmel <philip@turmel.org> wrote on 6 March 2015 at 13:57:
>
>
> On 03/06/2015 05:09 AM, Stefan Lamby wrote:
> >
> >
> > This is what I got right now.
> > What do you recommend to do?
>
> 1) pvcreate on the new array
> 2) vgextend to add the new array to the volume group
> 3) pvmove to get the data into the new array
> 4) vgreduce to disconnect the old array from lvm
> 5) pvremove to wipe the old array's lvm meta
> 6) stop the old array
> 7) mdadm --zero-superblock to clear the old members
> 8) mdadm --add to put those members into the new array
>
> Regards,
>
> Phil Turmel
>

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2015-03-09  8:51 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2015-03-05 17:56 Raid 10 Issue Stefan Lamby
2015-03-05 20:07 ` Phil Turmel
2015-03-06 10:09   ` Raid 10 Issue - Swapping Data from Array to Array Stefan Lamby
2015-03-06 12:57     ` Phil Turmel
2015-03-09  8:51       ` Raid 10 Issue - Swapping Data from Array to Array [SOLVED] Stefan Lamby
2015-03-06 19:06     ` Raid 10 Issue - Booting in case raid failed Stefan Lamby
2015-03-06 20:12       ` Phil Turmel
2015-03-08 14:59   ` Raid 10 Issue Wilson, Jonathan
2015-03-06  8:54 ` Robin Hill
2015-03-06  9:32   ` Raid 10 Issue [SOLVED] Stefan Lamby

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).