From: Stefan Lamby <webmaster@peter-speer.de>
To: "linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>
Subject: Raid 10 Issue
Date: Thu, 5 Mar 2015 18:56:00 +0100 (CET)
Message-ID: <1888475554.226351.1425578160090.JavaMail.open-xchange@app09.ox.hosteurope.de>
Hello list.
I set up a new machine with the Ubuntu 14.04.2 LTS installer, configuring a
RAID 10 with 2 disks and LVM on top of it. Now I would like to add 2 more disks
to the array, so that I end up with 4 disks and no spare.
Searching the internet, I found that I am not able to --grow the array with the
mdadm version this Ubuntu ships (v3.2.5).
Is that right?
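For reference, my understanding is that a newer mdadm (3.3 or later, I believe)
could reshape the running array in place, roughly like this; this is just a
sketch of what I would expect, not something I have tried here:

  # add the two new disks, then grow the array to 4 raid devices
  mdadm /dev/md0 --add /dev/sdc1 /dev/sdd1
  mdadm --grow /dev/md0 --raid-devices=4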
So I decided to build a new, degraded 4-disk array on the new disks instead and
move my data over afterwards, which failed:
(Is it OK to do it that way, or do you recommend another approach?)
root@kvm15:~# mdadm --verbose --create --level=10 --raid-devices=4 /dev/md10 \
    missing missing /dev/sdc1 /dev/sdd1
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: /dev/sdc1 appears to be part of a raid array:
level=raid10 devices=4 ctime=Fri Feb 27 15:49:14 2015
mdadm: /dev/sdd1 appears to be part of a raid array:
level=raid10 devices=4 ctime=Fri Feb 27 15:49:14 2015
mdadm: size set to 1904165376K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: RUN_ARRAY failed: Input/output error
<<<<<<<<<<<<<<<<<<<<<<<<<<<
root@kvm15:~#
root@kvm15:~#
root@kvm15:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md0 : active raid10 sdb1[1] sda1[0]
      1904165376 blocks super 1.2 2 near-copies [2/2] [UU]
unused devices: <none>
md0 btw. is the current (running) array.
I made a few earlier attempts to get this running; that must be the reason why
mdadm detects an already existing RAID config on sdc1 and sdd1.
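I assume I could clear that leftover metadata before recreating the array with
something like the following (just my guess, please tell me if that is unsafe):

  # wipe the stale md superblocks left over from the earlier attempts
  mdadm --zero-superblock /dev/sdc1
  mdadm --zero-superblock /dev/sdd1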
The partitions for sdc and sdd were created using fdisk; they have the same
layout as disk sdb, which looks like this:
(parted) print
Model: ATA WDC WD20PURX-64P (scsi)
Disk /dev/sdc: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      50.4GB  2000GB  1950GB  primary               RAID
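(For what it is worth, I created those partitions by hand in fdisk. As far as I
know the msdos partition table could also be copied over in one go, e.g.

  sfdisk -d /dev/sdb | sfdisk /dev/sdc   # destructive for sdc, double-check the target

but I have not used that here, so please correct me if it is a bad idea.)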
Any help is very welcome.
Thanks.
Stefan