From: "Bjørn Eikeland" <beikeland@gmail.com>
To: linux-raid@vger.kernel.org
Subject: Re: raid 5 created with 7 out of 8
Date: Mon, 10 Jan 2005 10:58:00 +0100 [thread overview]
Message-ID: <f4146e7c05011001584d07ba58@mail.gmail.com> (raw)
In-Reply-To: <f4146e7c05011001554b39fd9f@mail.gmail.com>
Hi, I don't know what's up with the "VERS = 9000" output, but it's
definitely from mdadm, as it has been present at every attempt at
creating the array.
However, I've upgraded to the 2.4.28 kernel and that made no
difference. What did make all my problems go away was issuing
mdadm -Cf /dev/md0 -l5 -n8 -c256 /dev/hd[e-l]1 (note the added "f",
i.e. --force); the array is now fully operational.
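
In case it helps anyone searching the archives later, the full
sequence that worked here was roughly the following (device names are
from my box; the --zero-superblock pass is only needed if the drives
held an old array):

  mdadm --zero-superblock /dev/hd[e-l]1
  mdadm -Cf /dev/md0 -l5 -n8 -c256 /dev/hd[e-l]1
  cat /proc/mdstat      # should show [8/8] [UUUUUUUU]
  mdadm -D /dev/md0     # all eight members listed as active sync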
As for the chunk size, that's just a performance experiment; 256
seems to have been the winner in the end. I can try to recreate the
problem later today if the output of mdadm -D /dev/md0 is of any
interest in finding out whether this is a bug or just me. (It should
also be mentioned that the older version of mdadm that shipped with
Slackware 10 produced the same problem.)
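
If anyone wants to repeat the chunk size experiment, a rough loop
like the one below would do. The numbers are illustrative, the dd
destroys anything on the array, and each timing should wait until the
initial resync has finished:

  for c in 64 128 256 512; do
      mdadm -S /dev/md0
      mdadm -Cf /dev/md0 -l5 -n8 -c$c /dev/hd[e-l]1
      # wait for the resync to complete (watch /proc/mdstat), then:
      time dd if=/dev/zero of=/dev/md0 bs=1M count=2048
  done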
-Bjorn
On Sun, 9 Jan 2005 22:24:27 -0500, Guy <bugzilla@watkins-home.com> wrote:
> From 1.8.1:
> This is a "development" release of mdadm. It should *not* be
> considered stable and should be used primarily for testing.
> The current "stable" version is 1.8.0.
>
> Your email shows "VERS = 9000". Was that a command line option? Or output
> from mdadm?
>
> The only other odd thing I see... You have the largest chunk size I have
> seen (-c512). But I don't know of any limits.
>
> I did create an array with this command line. No problems.
> mdadm -C /dev/md3 -l5 -n8 -c512 /dev/ram[0-7]
>
> from cat /proc/mdstat:
> md3 : active raid5 [dev 01:07][7] [dev 01:06][6] [dev 01:05][5] [dev
> 01:04][4] [dev 01:03][3] [dev 01:02][2] [dev 01:01][1] [dev 01:00][0]
> 25088 blocks level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
>
> Send output of:
> mdadm -D /dev/md0
>
> I am using mdadm V1.8.0 and kernel 2.4.28.
>
> Guy
>
> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org
> [mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Bjørn Eikeland
> Sent: Sunday, January 09, 2005 8:39 PM
> To: linux-raid@vger.kernel.org
> Subject: raid 5 created with 7 out of 8
>
> Hi, I'm trying to set up a raid5 array using 8 IDE drives
> (/dev/hd[e-l]) but I'm having a hard time.
>
> I'm using Slackware 10, kernel 2.4.26 and mdadm 1.8.1 (downloading
> 2.4.28 overnight now)
>
> The problem is that mdadm creates the array with 7 of the 8 drives
> up and running and the last as a spare, yet does not start rebuilding
> onto the spare. It also will not let me remove and re-add it. Below
> is a script capture of the whole session (less repartitioning the
> drives and zeroing any remaining superblocks).
>
> Any help will be greatly appreciated.
> -thanks
>
> root@filebear:~# mdadm -C /dev/md0 -l5 -n8 -c512 /dev/hd[e-l]1
> VERS = 9000
> mdadm: array /dev/md0 started.
> root@filebear:~# cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid5]
> read_ahead 1024 sectors
> md0 : active raid5 hdl1[8] hdk1[6] hdj1[5] hdi1[4] hdh1[3] hdg1[2]
> hdf1[1] hde1[0]
> 1094017792 blocks level 5, 512k chunk, algorithm 2 [8/7] [UUUUUUU_]
>
> unused devices: <none>
> root@filebear:~# mdadm /dev/md0 -f /dev/hdl1
> mdadm: set /dev/hdl1 faulty in /dev/md0
> root@filebear:~# mdadm /dev/md0 -r /dev/hdl1
> mdadm: hot removed /dev/hdl1
> root@filebear:~# mdadm /dev/md0 -a /dev/hdl1
> mdadm: hot add failed for /dev/hdl1: No space left on device
> root@filebear:~# mdadm /dev/md0 -f /dev/hde1
> mdadm: set /dev/hde1 faulty in /dev/md0
> root@filebear:~# mdadm /dev/md0 -r /dev/hde1
> mdadm: hot removed /dev/hde1
> root@filebear:~# mdadm /dev/md0 -a /dev/hde1
> mdadm: hot add failed for /dev/hde1: No space left on device