From: Andreas Boman <aboman@midgaard.us>
To: Phil Turmel <philip@turmel.org>
Cc: linux-raid <linux-raid@vger.kernel.org>
Subject: Re: Failed during rebuild (raid5)
Date: Mon, 06 May 2013 21:14:18 -0400
Message-ID: <5188556A.7050605@midgaard.us>
In-Reply-To: <51884D49.3090602@turmel.org>
On 05/06/2013 08:39 PM, Phil Turmel wrote:
> On 05/06/2013 04:54 PM, Andreas Boman wrote:
>> On 05/06/2013 08:36 AM, Phil Turmel wrote:
>
> [trim /]
<snip>
>
>
> Hmmm. v0.90 is at the end of the member device. Does your partition go
> all the way to the end? Please show your partition tables:
>
> fdisk -lu /dev/sd[bcdefg]
fdisk -lu /dev/sd[bcdefg]
Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3d1e17f0
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               63  2930272064  1465136001   fd  Linux raid autodetect

Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               63  2930272064  1465136001   fd  Linux raid autodetect

Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               63  2930272064  1465136001   fd  Linux raid autodetect

Disk /dev/sde: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x36cc19da
   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               63  2930272064  1465136001   fd  Linux raid autodetect

Disk /dev/sdf: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x3d1e17f0
   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               63  2930272064  1465136001   fd  Linux raid autodetect
Partition 1 does not start on physical sector boundary.
Disk /dev/sdg: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               63  2930272064  1465136001   fd  Linux raid autodetect
Partition 1 does not start on physical sector boundary.
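(As a sanity check on your question: if I've read md's v0.90 layout right, the
superblock sits in the last 64 KiB-aligned 64 KiB block of the member device,
i.e. the partition, not the raw disk. A quick sketch of the byte offset mdadm
should probe, using /dev/sdb1's size from the table above:)

```shell
# Where mdadm expects a v0.90 superblock on a member device: the last
# 64 KiB-aligned 64 KiB block. Member size taken from the fdisk output:
# /dev/sdb1 = 1465136001 1K-blocks = 2930272002 sectors.
sectors=2930272002
bytes=$((sectors * 512))
sb_offset=$(( (bytes & ~65535) - 65536 ))   # byte offset of the superblock
echo "$sb_offset"
```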
>> Warning: device does not support SCT Error Recovery Control command
>
> Since these cannot be set to a short error timeout, the linux driver's
> timeout must be changed to tolerate 2+ minutes of error recovery. I
> recommend 180 seconds. This must be put in /etc/local.d/ or
> /etc/rc.local like so:
>
> # echo 180 > /sys/block/sdf/device/timeout
>
> If you don't do this, "check" scrubbing will fail. And by fail, I mean
> any ordinary URE will kick drives out instead of fixing them. Search
> the archives for "scterc" and you'll find more detailed explanations
> (attached to horror stories).
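For all six members, I guess something like this in /etc/rc.local would do it
(device names taken from the fdisk output above; the -w test just skips any
name that doesn't exist at boot):

```shell
# /etc/rc.local fragment: raise the kernel's SCSI command timeout to
# 180 s for each array member, so drives without SCT ERC get time to
# finish their own error recovery before the kernel resets them.
for d in sdb sdc sdd sde sdf sdg; do
    if [ -w "/sys/block/$d/device/timeout" ]; then
        echo 180 > "/sys/block/$d/device/timeout"
    fi
done
```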
Thank you! I had no idea about that, or I obviously would not have
bought those disks...
<snip>
>
> I would encourage you to take your backups of critical files as soon as
> the array is running, before you add a fifth disk. Then you can add two
> disks and recover/reshape simultaneously.
Hmm... any hints as to how to do that at the same time? That does sound
better.
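If I'm reading mdadm(8) right, something like this would kick off both at once
after the backups are done (device and array names are just examples, untested
here):

```shell
# Sketch only: add both new disks to the degraded 5-device array, then
# grow it to 6 devices. md takes one new disk to complete recovery and
# incorporates the other as part of the reshape.
mdadm /dev/md0 --add /dev/sdf1 /dev/sdg1
mdadm --grow /dev/md0 --raid-devices=6 --backup-file=/root/md0-grow.bak
cat /proc/mdstat    # watch recovery/reshape progress
```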
Thank you for all your help/advice, Phil.
Andreas
Thread overview: 24+ messages
2013-05-03 11:23 Failed during rebuild (raid5) Andreas Boman
2013-05-03 11:38 ` Benjamin ESTRABAUD
2013-05-03 12:40 ` Robin Hill
2013-05-03 13:52 ` John Stoffel
2013-05-03 14:51 ` Phil Turmel
2013-05-03 16:23 ` John Stoffel
2013-05-03 16:32 ` Roman Mamedov
2013-05-04 14:48 ` maurice
2013-05-03 16:29 ` Mikael Abrahamsson
2013-05-03 19:29 ` John Stoffel
2013-05-04 4:14 ` Mikael Abrahamsson
2013-05-03 12:26 ` Ole Tange
2013-05-04 11:29 ` Andreas Boman
2013-05-05 14:00 ` Andreas Boman
2013-05-05 17:16 ` Andreas Boman
2013-05-06 1:10 ` Sam Bingner
2013-05-06 3:21 ` Phil Turmel
[not found] ` <51878BD0.9010809@midgaard.us>
2013-05-06 12:36 ` Phil Turmel
[not found] ` <5188189D.1060806@midgaard.us>
2013-05-07 0:39 ` Phil Turmel
2013-05-07 1:14 ` Andreas Boman [this message]
2013-05-07 1:46 ` Phil Turmel
2013-05-07 2:08 ` Andreas Boman
2013-05-07 2:16 ` Phil Turmel
2013-05-07 2:21 ` Andreas Boman