* Crash during raid6 reshape, now cannot restart?
From: Phil Genera @ 2010-12-10 17:05 UTC (permalink / raw)
To: linux-raid
I had a power failure during a large raid6 reshape (6->8 disks) on one
of my arm systems last night, and can't seem to get it going again.
I did this:
# mdadm --grow --backup-file=./backup.mdadm --array-size=8 /dev/md0
which (I've now noticed) didn't seem to write a backup file. There was
a read error during the reshape, but it claimed recovery:
Dec 9 20:48:07 love kernel: sd 2:0:0:0: [sda] Unhandled sense code
Dec 9 20:48:07 love kernel: sd 2:0:0:0: [sda] Result: hostbyte=DID_OK
driverbyte=DRIVER_SENSE
Dec 9 20:48:07 love kernel: sd 2:0:0:0: [sda] Sense Key : Medium
Error [current]
Dec 9 20:48:07 love kernel: sd 2:0:0:0: [sda] Add. Sense: Unrecovered
read error
Dec 9 20:48:07 love kernel: sd 2:0:0:0: [sda] CDB: Read(10): 28 00 00
02 09 60 00 00 20 00
Dec 9 20:48:07 love kernel: end_request: I/O error, dev sda, sector 133472
Dec 9 20:48:08 love kernel: raid5:md0: read error corrected (8
sectors at 133472 on sda)
Dec 9 20:48:08 love kernel: raid5:md0: read error corrected (8
sectors at 133480 on sda)
Dec 9 20:48:08 love kernel: raid5:md0: read error corrected (8
sectors at 133488 on sda)
Dec 9 20:48:08 love kernel: raid5:md0: read error corrected (8
sectors at 133496 on sda)
Some time during the night, the electricity went away, and on reboot I get this:
raid5: reshape_position too early for auto-recovery - aborting.
as well as when I try to assemble the array manually. There's nothing
critical I don't have backed up, but there's a lot of TV on there I
was planning to watch :).
Any good ideas? I'd sure appreciate some help. I'm guessing this is
just a crash in the critical section, and without a backup file I'm
screwed. I'm surprised the backup file is still needed 200gb into the
reshape though. Thanks!
Versions & status:
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : inactive sdg[0] sdj[7] sdi[6] sdf[5] sde[4] sdd[3] sdc[2] sdh[1]
3125690368 blocks super 0.91
# uname -a
Linux love 2.6.32-5-kirkwood #1 Sun Oct 31 11:19:32 UTC 2010 armv5tel GNU/Linux
# mdadm --version
mdadm - v3.1.4 - 31st August 2010
More details (and --examine of all disks attached):
# mdadm --detail /dev/md0
/dev/md0:
Version : 0.91
Creation Time : Fri Oct 9 09:32:08 2009
Raid Level : raid6
Used Dev Size : 390711296 (372.61 GiB 400.09 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Dec 10 05:52:35 2010
State : active, Not Started
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Delta Devices : 2, (6->8)
UUID : 81ddccd8:5abf5b03:181548d9:47e92625
Events : 0.1048248
Number Major Minor RaidDevice State
0 8 96 0 active sync /dev/sdg
1 8 112 1 active sync /dev/sdh
2 8 32 2 active sync /dev/sdc
3 8 48 3 active sync /dev/sdd
4 8 64 4 active sync /dev/sde
5 8 80 5 active sync /dev/sdf
6 8 128 6 active sync /dev/sdi
7 8 144 7 active sync /dev/sdj
--
Phil
[-- Attachment #2: examine.txt --]
/dev/sdc:
Magic : a92b4efc
Version : 0.91.00
UUID : 81ddccd8:5abf5b03:181548d9:47e92625
Creation Time : Fri Oct 9 09:32:08 2009
Raid Level : raid6
Used Dev Size : 390711296 (372.61 GiB 400.09 GB)
Array Size : 2344267776 (2235.67 GiB 2400.53 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Reshape pos'n : 306680832 (292.47 GiB 314.04 GB)
Delta Devices : 2 (6->8)
Update Time : Fri Dec 10 05:52:35 2010
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : b9936c7a - correct
Events : 1048248
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 2 8 0 2 active sync /dev/sda
0 0 8 64 0 active sync /dev/sde
1 1 8 80 1 active sync /dev/sdf
2 2 8 0 2 active sync /dev/sda
3 3 8 16 3 active sync /dev/sdb
4 4 8 32 4 active sync /dev/sdc
5 5 8 48 5 active sync /dev/sdd
6 6 8 96 6 active sync /dev/sdg
7 7 8 112 7 active sync /dev/sdh
/dev/sdd:
Magic : a92b4efc
Version : 0.91.00
UUID : 81ddccd8:5abf5b03:181548d9:47e92625
Creation Time : Fri Oct 9 09:32:08 2009
Raid Level : raid6
Used Dev Size : 390711296 (372.61 GiB 400.09 GB)
Array Size : 2344267776 (2235.67 GiB 2400.53 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Reshape pos'n : 306680832 (292.47 GiB 314.04 GB)
Delta Devices : 2 (6->8)
Update Time : Fri Dec 10 05:52:35 2010
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : b9936c8c - correct
Events : 1048248
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 3 8 16 3 active sync /dev/sdb
0 0 8 64 0 active sync /dev/sde
1 1 8 80 1 active sync /dev/sdf
2 2 8 0 2 active sync /dev/sda
3 3 8 16 3 active sync /dev/sdb
4 4 8 32 4 active sync /dev/sdc
5 5 8 48 5 active sync /dev/sdd
6 6 8 96 6 active sync /dev/sdg
7 7 8 112 7 active sync /dev/sdh
/dev/sde:
Magic : a92b4efc
Version : 0.91.00
UUID : 81ddccd8:5abf5b03:181548d9:47e92625
Creation Time : Fri Oct 9 09:32:08 2009
Raid Level : raid6
Used Dev Size : 390711296 (372.61 GiB 400.09 GB)
Array Size : 2344267776 (2235.67 GiB 2400.53 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Reshape pos'n : 306680832 (292.47 GiB 314.04 GB)
Delta Devices : 2 (6->8)
Update Time : Fri Dec 10 05:52:35 2010
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : b9936c9e - correct
Events : 1048248
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 4 8 32 4 active sync /dev/sdc
0 0 8 64 0 active sync /dev/sde
1 1 8 80 1 active sync /dev/sdf
2 2 8 0 2 active sync /dev/sda
3 3 8 16 3 active sync /dev/sdb
4 4 8 32 4 active sync /dev/sdc
5 5 8 48 5 active sync /dev/sdd
6 6 8 96 6 active sync /dev/sdg
7 7 8 112 7 active sync /dev/sdh
/dev/sdf:
Magic : a92b4efc
Version : 0.91.00
UUID : 81ddccd8:5abf5b03:181548d9:47e92625
Creation Time : Fri Oct 9 09:32:08 2009
Raid Level : raid6
Used Dev Size : 390711296 (372.61 GiB 400.09 GB)
Array Size : 2344267776 (2235.67 GiB 2400.53 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Reshape pos'n : 306680832 (292.47 GiB 314.04 GB)
Delta Devices : 2 (6->8)
Update Time : Fri Dec 10 05:52:35 2010
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : b9936cb0 - correct
Events : 1048248
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 5 8 48 5 active sync /dev/sdd
0 0 8 64 0 active sync /dev/sde
1 1 8 80 1 active sync /dev/sdf
2 2 8 0 2 active sync /dev/sda
3 3 8 16 3 active sync /dev/sdb
4 4 8 32 4 active sync /dev/sdc
5 5 8 48 5 active sync /dev/sdd
6 6 8 96 6 active sync /dev/sdg
7 7 8 112 7 active sync /dev/sdh
/dev/sdg:
Magic : a92b4efc
Version : 0.91.00
UUID : 81ddccd8:5abf5b03:181548d9:47e92625
Creation Time : Fri Oct 9 09:32:08 2009
Raid Level : raid6
Used Dev Size : 390711296 (372.61 GiB 400.09 GB)
Array Size : 2344267776 (2235.67 GiB 2400.53 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Reshape pos'n : 306680832 (292.47 GiB 314.04 GB)
Delta Devices : 2 (6->8)
Update Time : Fri Dec 10 05:52:35 2010
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : b9936cb6 - correct
Events : 1048248
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 64 0 active sync /dev/sde
0 0 8 64 0 active sync /dev/sde
1 1 8 80 1 active sync /dev/sdf
2 2 8 0 2 active sync /dev/sda
3 3 8 16 3 active sync /dev/sdb
4 4 8 32 4 active sync /dev/sdc
5 5 8 48 5 active sync /dev/sdd
6 6 8 96 6 active sync /dev/sdg
7 7 8 112 7 active sync /dev/sdh
/dev/sdh:
Magic : a92b4efc
Version : 0.91.00
UUID : 81ddccd8:5abf5b03:181548d9:47e92625
Creation Time : Fri Oct 9 09:32:08 2009
Raid Level : raid6
Used Dev Size : 390711296 (372.61 GiB 400.09 GB)
Array Size : 2344267776 (2235.67 GiB 2400.53 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Reshape pos'n : 306680832 (292.47 GiB 314.04 GB)
Delta Devices : 2 (6->8)
Update Time : Fri Dec 10 05:52:35 2010
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : b9936cc8 - correct
Events : 1048248
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 1 8 80 1 active sync /dev/sdf
0 0 8 64 0 active sync /dev/sde
1 1 8 80 1 active sync /dev/sdf
2 2 8 0 2 active sync /dev/sda
3 3 8 16 3 active sync /dev/sdb
4 4 8 32 4 active sync /dev/sdc
5 5 8 48 5 active sync /dev/sdd
6 6 8 96 6 active sync /dev/sdg
7 7 8 112 7 active sync /dev/sdh
/dev/sdi:
Magic : a92b4efc
Version : 0.91.00
UUID : 81ddccd8:5abf5b03:181548d9:47e92625
Creation Time : Fri Oct 9 09:32:08 2009
Raid Level : raid6
Used Dev Size : 390711296 (372.61 GiB 400.09 GB)
Array Size : 2344267776 (2235.67 GiB 2400.53 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Reshape pos'n : 306680832 (292.47 GiB 314.04 GB)
Delta Devices : 2 (6->8)
Update Time : Fri Dec 10 05:52:35 2010
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : b9936ce2 - correct
Events : 1048248
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 6 8 96 6 active sync /dev/sdg
0 0 8 64 0 active sync /dev/sde
1 1 8 80 1 active sync /dev/sdf
2 2 8 0 2 active sync /dev/sda
3 3 8 16 3 active sync /dev/sdb
4 4 8 32 4 active sync /dev/sdc
5 5 8 48 5 active sync /dev/sdd
6 6 8 96 6 active sync /dev/sdg
7 7 8 112 7 active sync /dev/sdh
/dev/sdj:
Magic : a92b4efc
Version : 0.91.00
UUID : 81ddccd8:5abf5b03:181548d9:47e92625
Creation Time : Fri Oct 9 09:32:08 2009
Raid Level : raid6
Used Dev Size : 390711296 (372.61 GiB 400.09 GB)
Array Size : 2344267776 (2235.67 GiB 2400.53 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Reshape pos'n : 306680832 (292.47 GiB 314.04 GB)
Delta Devices : 2 (6->8)
Update Time : Fri Dec 10 05:52:35 2010
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Checksum : b9936cf4 - correct
Events : 1048248
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 7 8 112 7 active sync /dev/sdh
0 0 8 64 0 active sync /dev/sde
1 1 8 80 1 active sync /dev/sdf
2 2 8 0 2 active sync /dev/sda
3 3 8 16 3 active sync /dev/sdb
4 4 8 32 4 active sync /dev/sdc
5 5 8 48 5 active sync /dev/sdd
6 6 8 96 6 active sync /dev/sdg
7 7 8 112 7 active sync /dev/sdh
* Re: Crash during raid6 reshape, now cannot restart?
From: Neil Brown @ 2010-12-10 20:43 UTC (permalink / raw)
To: Phil Genera; +Cc: linux-raid
On Fri, 10 Dec 2010 09:05:47 -0800 Phil Genera <pg@fivesevenfive.org> wrote:
> I had a power failure during a large raid6 reshape (6->8 disks) on one
> of my arm systems last night, and can't seem to get it going again.
>
> I did this:
> # mdadm --grow --backup-file=./backup.mdadm --array-size=8 /dev/md0
>
> which (I've now noticed) didn't seem to write a backup file. There was
> a read error during the reshape, but it claimed recovery:
> Dec 9 20:48:07 love kernel: sd 2:0:0:0: [sda] Unhandled sense code
> Dec 9 20:48:07 love kernel: sd 2:0:0:0: [sda] Result: hostbyte=DID_OK
> driverbyte=DRIVER_SENSE
> Dec 9 20:48:07 love kernel: sd 2:0:0:0: [sda] Sense Key : Medium
> Error [current]
> Dec 9 20:48:07 love kernel: sd 2:0:0:0: [sda] Add. Sense: Unrecovered
> read error
> Dec 9 20:48:07 love kernel: sd 2:0:0:0: [sda] CDB: Read(10): 28 00 00
> 02 09 60 00 00 20 00
> Dec 9 20:48:07 love kernel: end_request: I/O error, dev sda, sector 133472
> Dec 9 20:48:08 love kernel: raid5:md0: read error corrected (8
> sectors at 133472 on sda)
> Dec 9 20:48:08 love kernel: raid5:md0: read error corrected (8
> sectors at 133480 on sda)
> Dec 9 20:48:08 love kernel: raid5:md0: read error corrected (8
> sectors at 133488 on sda)
> Dec 9 20:48:08 love kernel: raid5:md0: read error corrected (8
> sectors at 133496 on sda)
>
> Some time during the night, the electricity went away, and on reboot I get this:
>
> raid5: reshape_position too early for auto-recovery - aborting.
Something must be going wrong with the math in raid5:
	if (mddev->delta_disks < 0
	    ? (here_new * mddev->new_chunk_sectors <=
	       here_old * mddev->chunk_sectors)
	    : (here_new * mddev->new_chunk_sectors >=
	       here_old * mddev->chunk_sectors)) {
		/* Reading from the same stripe as writing to - bad */
		printk(KERN_ERR "raid5: reshape_position too early for "
		       "auto-recovery - aborting.\n");
		return -EINVAL;
	}
There 'here_new * mddev->new_chunk_sectors' must be overflowing. So the size of the
array must only just fit into sector_t.
On an arm5 you would need to have CONFIG_LBD set - do you know if it is?
I guess I need to make that code more robust when sector_t doesn't have lots
more bits than the size of the device...
If you can compile your own kernel, you should be able to get it to work
easily. If not ... complain to whoever provided you with a kernel.
NeilBrown
>
> as well as when I try to assemble the array manually. There's nothing
> critical I don't have backed up, but there's a lot of TV on there I
> was planning to watch :).
>
> Any good ideas? I'd sure appreciate some help. I'm guessing this is
> just a crash in the critical section, and without a backup file I'm
> screwed. I'm surprised the backup file is still needed 200gb into the
> reshape though. Thanks!
>
>
> Versions & status:
>
> # cat /proc/mdstat
> Personalities : [raid1] [raid6] [raid5] [raid4]
> md0 : inactive sdg[0] sdj[7] sdi[6] sdf[5] sde[4] sdd[3] sdc[2] sdh[1]
> 3125690368 blocks super 0.91
>
> # uname -a
> Linux love 2.6.32-5-kirkwood #1 Sun Oct 31 11:19:32 UTC 2010 armv5tel GNU/Linux
> # mdadm --version
> mdadm - v3.1.4 - 31st August 2010
>
>
> More details (and --examine of all disks attached):
>
> # mdadm --detail /dev/md0
> /dev/md0:
> Version : 0.91
> Creation Time : Fri Oct 9 09:32:08 2009
> Raid Level : raid6
> Used Dev Size : 390711296 (372.61 GiB 400.09 GB)
> Raid Devices : 8
> Total Devices : 8
> Preferred Minor : 0
> Persistence : Superblock is persistent
>
> Update Time : Fri Dec 10 05:52:35 2010
> State : active, Not Started
> Active Devices : 8
> Working Devices : 8
> Failed Devices : 0
> Spare Devices : 0
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Delta Devices : 2, (6->8)
>
> UUID : 81ddccd8:5abf5b03:181548d9:47e92625
> Events : 0.1048248
>
> Number Major Minor RaidDevice State
> 0 8 96 0 active sync /dev/sdg
> 1 8 112 1 active sync /dev/sdh
> 2 8 32 2 active sync /dev/sdc
> 3 8 48 3 active sync /dev/sdd
> 4 8 64 4 active sync /dev/sde
> 5 8 80 5 active sync /dev/sdf
> 6 8 128 6 active sync /dev/sdi
> 7 8 144 7 active sync /dev/sdj
>
> --
> Phil
* Re: Crash during raid6 reshape, now cannot restart?
From: Neil Brown @ 2010-12-10 22:02 UTC (permalink / raw)
To: Phil Genera; +Cc: linux-raid
On Sat, 11 Dec 2010 07:43:05 +1100 Neil Brown <neilb@suse.de> wrote:
> >
> > raid5: reshape_position too early for auto-recovery - aborting.
>
> Something must be going wrong with the math in raid5:
>
> 	if (mddev->delta_disks < 0
> 	    ? (here_new * mddev->new_chunk_sectors <=
> 	       here_old * mddev->chunk_sectors)
> 	    : (here_new * mddev->new_chunk_sectors >=
> 	       here_old * mddev->chunk_sectors)) {
> 		/* Reading from the same stripe as writing to - bad */
> 		printk(KERN_ERR "raid5: reshape_position too early for "
> 		       "auto-recovery - aborting.\n");
> 		return -EINVAL;
> 	}
>
> There 'here_new * mddev->new_chunk_sectors' must be overflowing. So the size of the
> array must only just fit into sector_t.
> On an arm5 you would need to have CONFIG_LBD set - do you know if it is?
>
> I guess I need to make that code more robust when sector_t doesn't have lots
> more bits than the size of the device...
>
> If you can compile your own kernel, you should be able to get it to work
> easily. If not ... complain to whoever provided you with a kernel.
>
No ... I take that back. here_new is the result of dividing the
reshape_position by chunk_sectors times the number of disks.
So multiplying by chunk_sectors again is not going to cause an overflow.
So I have no idea what is going on here.... maybe a compiler bug?
If you compile your own kernel, I would put some printk's in
drivers/md/raid5.c just before the above code to see what the values of the
variables are, and to see what the results of the multiplications will be.
NeilBrown
* Re: Crash during raid6 reshape, now cannot restart?
From: Phil Genera @ 2010-12-10 22:11 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid
On Fri, Dec 10, 2010 at 14:02, Neil Brown <neilb@suse.de> wrote:
> On Sat, 11 Dec 2010 07:43:05 +1100 Neil Brown <neilb@suse.de> wrote:
>
>> >
>> > raid5: reshape_position too early for auto-recovery - aborting.
>>
>> Something must be going wrong with the math in raid5:
>>
>> 	if (mddev->delta_disks < 0
>> 	    ? (here_new * mddev->new_chunk_sectors <=
>> 	       here_old * mddev->chunk_sectors)
>> 	    : (here_new * mddev->new_chunk_sectors >=
>> 	       here_old * mddev->chunk_sectors)) {
>> 		/* Reading from the same stripe as writing to - bad */
>> 		printk(KERN_ERR "raid5: reshape_position too early for "
>> 		       "auto-recovery - aborting.\n");
>> 		return -EINVAL;
>> 	}
>>
>> There 'here_new * mddev->new_chunk_sectors' must be overflowing. So the size of the
>> array must only just fit into sector_t.
>> On an arm5 you would need to have CONFIG_LBD set - do you know if it is?
Looking at what I think is the relevant kirkwood kernel config, I
only see CONFIG_LBDAF here:
http://merkel.debian.org/~jurij/
which, upon further investigation, is the new name of CONFIG_LBD. So I
believe it is set.
>> I guess I need to make that code more robust when sector_t doesn't have lots
>> more bits than the size of the device...
>>
>> If you can compile your own kernel, you should be able to get it to work
>> easily. If not ... complain to whoever provided you with a kernel.
That'd be the nice folks working on debian squeeze.
> No ... I take that back. here_new is the result of dividing the
> reshape_position by chunk_sectors times the number of disks.
> So multiplying by chunk_sectors again is not going to cause an overflow.
>
> So I have no idea what is going on here.... maybe a compiler bug?
>
> If you compile your own kernel, I would put some printk's in
> drivers/md/raid5.c just before the above code to see what the values of the
> variables are, and to see what the results of the multiplications will be.
I'm happy to try my own kernel, but first I think I'll try pulling the
disks and putting them on an x86 machine to see if it's actually
platform-related. If that works, I can keep the broken array around
for a bit to troubleshoot on. Thanks!
--
Phil
* Re: Crash during raid6 reshape, now cannot restart?
From: Phil Genera @ 2010-12-12 3:12 UTC (permalink / raw)
To: linux-raid
On Sat, Dec 11, 2010 at 11:06, Phil Genera <pg@fivesevenfive.org> wrote:
> On Fri, Dec 10, 2010 at 14:11, Phil Genera <pg@fivesevenfive.org> wrote:
>> I'm happy to try my own kernel, but first I think I'll try pulling the
>> disks and putting them on an x86 machine to see if it's actually
>> platform-related. If that works, I can keep the broken array around
>> for a bit to troubleshoot on. Thanks!
>
> Gave it a shot on an x86 machine today (with an oddly old mdadm, from
> Ubuntu lucid), and mdadm segfaulted. I guess there's no avoiding
> building some kernel modules.
>
> details:
>
> $ mdadm --version
> mdadm - v2.6.7.1 - 15th October 2008
I built mdadm 3.1.4 from source, and got the array reshaping again on
x86. I'm still trying to build a new raid456.ko on this tiny arm box
with some extra debugging in it.
--
Phil