From: Peter Rabbitson <rabbit+list@rabbit.us>
To: Bill Davidsen <davidsen@tmr.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: RAID5 in sync does not populate slots sequentially, shows array as (somewhat) faulty
Date: Tue, 04 Mar 2008 11:25:43 +0100	[thread overview]
Message-ID: <47CD23A7.2000904@rabbit.us> (raw)
In-Reply-To: <47CC44C7.1040304@tmr.com>

Bill Davidsen wrote:
> Peter Rabbitson wrote:
>> Hello,
>>
>> Noticing the problems Tor Vestbø is having, I remembered that I have 
>> an array in a similar state, which I never figured out. The array has 
>> been working flawlessly for 3 months, and the monthly 'check' runs come 
>> back clean. However, this is how the array looks through mdadm's eyes:
> 
> I'm in agreement that something is odd about the disk numbers here, and 
> I'm suspicious because I have never seen this with 0.90 superblocks. 
> That doesn't mean it couldn't happen and I never noticed; it's certainly 
> odd that four drives wouldn't be numbered 0..3, since in raid5 they are all 
> equally out of sync.
> 

After Tor Arne reported his success I figured I would simply fail and remove 
sda3, wipe it clean, and add it back. I zeroed the superblock beforehand and 
also wrote zeros (dd if=/dev/zero) over the start and end of the partition, 
just to make sure everything was wiped (the sequence is sketched below). After 
the resync I am back at square one: the data offset of sda3 differs from that 
of the other members, and the array still reports one failed drive. In case 
someone can shed some light on this, I made snapshots of the superblocks[1] 
along with the current output of mdadm, available at 
http://rabbit.us/pool/md5_problem.tar.bz2.

[1] dd if=/dev/sdX3 of=sdX_sb count=<Data Offset> bs=512
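
For reference, the fail/wipe/re-add cycle looked roughly like this (a sketch 
rather than a verbatim transcript; the 16 MiB wipe size is illustrative, and 
the dd targets the member partition, not the whole disk):

mdadm /dev/md5 --fail /dev/sda3 --remove /dev/sda3   # drop the member
mdadm --zero-superblock /dev/sda3                    # clear the md superblock
dd if=/dev/zero of=/dev/sda3 bs=1M count=16          # wipe the partition start
# Wipe the tail: seek to 16 MiB before the end (blockdev --getsz reports
# 512-byte sectors); this dd stops with a harmless "No space left on device".
dd if=/dev/zero of=/dev/sda3 bs=1M seek=$(( $(blockdev --getsz /dev/sda3) / 2048 - 16 ))
mdadm /dev/md5 --add /dev/sda3                       # add it back, resync starts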

Here is my system config:

root@Thesaurus:/arx/space/pool# fdisk -l /dev/sd[abcd]

Disk /dev/sda: 400.0 GB, 400088457216 bytes
255 heads, 63 sectors/track, 48641 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

    Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1           7       56196   fd  Linux raid autodetect
/dev/sda2               8         507     4016250   fd  Linux raid autodetect
/dev/sda3             508       36407   288366750   83  Linux
/dev/sda4           36408       48641    98269605   83  Linux

Disk /dev/sdb: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

    Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1           7       56196   fd  Linux raid autodetect
/dev/sdb2               8         507     4016250   fd  Linux raid autodetect
/dev/sdb3             508       36407   288366750   83  Linux
/dev/sdb4           36408       38913    20129445   83  Linux

Disk /dev/sdc: 300.0 GB, 300090728448 bytes
255 heads, 63 sectors/track, 36483 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

    Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1           7       56196   fd  Linux raid autodetect
/dev/sdc2               8         507     4016250   fd  Linux raid autodetect
/dev/sdc3             508       36407   288366750   83  Linux
/dev/sdc4           36408       36483      610470   83  Linux

Disk /dev/sdd: 300.0 GB, 300090728448 bytes
255 heads, 63 sectors/track, 36483 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

    Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1           7       56196   fd  Linux raid autodetect
/dev/sdd2               8         507     4016250   fd  Linux raid autodetect
/dev/sdd3             508       36407   288366750   83  Linux
/dev/sdd4           36408       36483      610470   83  Linux
root@Thesaurus:/arx/space/pool#

root@Thesaurus:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md5 : active raid5 sda3[4] sdd3[3] sdc3[2] sdb3[1]
       865081344 blocks super 1.1 level 5, 2048k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
       56128 blocks [4/4] [UUUU]

md10 : active raid10 sdd2[3] sdc2[2] sdb2[1] sda2[0]
       5353472 blocks 1024K chunks 3 far-copies [4/4] [UUUU]

unused devices: <none>
root@Thesaurus:~#
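
For the record, the differing data offset is visible straight from the member 
superblocks; something along these lines (same device names as above) prints 
the Data Offset recorded for each member:

for d in /dev/sd[abcd]3; do
    printf '%s: ' "$d"                        # device name first
    mdadm --examine "$d" | grep 'Data Offset'
done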



