* Help - this doesn't look good...
@ 2004-11-25 13:05 David Greaves
2004-11-27 18:22 ` Guy
2004-11-30 2:23 ` Neil Brown
0 siblings, 2 replies; 5+ messages in thread
From: David Greaves @ 2004-11-25 13:05 UTC (permalink / raw)
To: linux-raid
Sigh,
I'm having what might be xfs/nfsd conflicts and thought I'd reboot into
an old 2.6.6 kernel which used to be stable.
Of course it spotted the fd partitions and tried to start the array.
It failed (the old kernel didn't have a driver for the new controller, so
some devices were missing).
However, when I came back to 2.6.9 I got the rather conflicting status
shown below.
It had already mounted (xfs) but I unmounted it quite quickly.
Can this do any harm?
Should I leave it to complete?
Can I safely remount?
My worry is that the kernel and mdadm think all the devices are 'up' and
so may write to them and upset the resync (I suspect it thinks /dev/sdf1
is dirty since that wasn't there under 2.6.6)
cu:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Sun Nov 21 21:36:49 2004
     Raid Level : raid5
     Array Size : 1225543680 (1168.77 GiB 1254.96 GB)
    Device Size : 245108736 (233.75 GiB 250.99 GB)
   Raid Devices : 6
  Total Devices : 7
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Nov 25 12:51:46 2004
          State : dirty, resyncing
 Active Devices : 6
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 4096K

 Rebuild Status : 0% complete

           UUID : 44e121b0:6e3422b0:4d67f451:51df5ae0
         Events : 0.35500

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       3       65        4      active sync   /dev/hdb1
       5       8       81        5      active sync   /dev/sdf1

       6       8       65        -      spare         /dev/sde1
cu:~#
cu:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [raid6]
md0 : active raid5 sdf1[5] sde1[6] sdd1[3] sdc1[2] sdb1[1] sda1[0] hdb1[4]
      1225543680 blocks level 5, 4096k chunk, algorithm 2 [6/6] [UUUUUU]
      [>....................]  resync =  0.3% (905600/245108736) finish=304.3min speed=13369K/sec
unused devices: <none>
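(A minimal sketch of a cautious way to proceed in this situation, not taken
from the thread itself. /dev/md0 and the XFS filesystem are from the setup
above; the /mnt mount point and the use of xfs_repair are illustrative
assumptions.)

    watch -n 60 cat /proc/mdstat        # follow the resync to completion
    mdadm --detail /dev/md0             # confirm State is no longer "dirty, resyncing"
    mount -t xfs -o ro /dev/md0 /mnt    # remount read-only first if being careful
    umount /mnt
    xfs_repair -n /dev/md0              # no-modify filesystem check (array must be unmounted)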
* RE: Help - this doesn't look good...
2004-11-25 13:05 Help - this doesn't look good David Greaves
@ 2004-11-27 18:22 ` Guy
2004-11-27 19:20 ` David Greaves
2004-11-30 2:23 ` Neil Brown
1 sibling, 1 reply; 5+ messages in thread
From: Guy @ 2004-11-27 18:22 UTC (permalink / raw)
To: 'David Greaves', linux-raid
If your only concern is the re-sync, then no problem.
An array is usable while it is re-syncing.
However, I don't know why it is re-syncing. Maybe the failed attempt to
start the array is at fault. I don't know if this is normal or not.
Guy
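(A sketch, not part of Guy's mail: the array stays usable during a
background resync, and if the resync I/O competes too much with normal use,
the 2.6 md driver's rebuild-rate limits can be tuned. The value written
below is purely illustrative.)

    cat /proc/sys/dev/raid/speed_limit_min             # current resync speed floor, in KB/s
    cat /proc/sys/dev/raid/speed_limit_max             # current ceiling, in KB/s
    echo 1000 > /proc/sys/dev/raid/speed_limit_min     # e.g. lower the floor so normal I/O takes priority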
-----Original Message-----
From: linux-raid-owner@vger.kernel.org
[mailto:linux-raid-owner@vger.kernel.org] On Behalf Of David Greaves
Sent: Thursday, November 25, 2004 8:05 AM
To: linux-raid@vger.kernel.org
Subject: Help - this doesn't look good...
Sigh,
I'm having what might be xfs/nfsd conflicts and thought I'd reboot into
an old 2.6.6 kernel which used to be stable.
Of course it spotted the fd partitions and tried to start the array.
It failed (the old kernel didn't have a driver for the new controller, so
some devices were missing).
However, when I came back to 2.6.9 I got the rather conflicting status
shown below.
It had already mounted (xfs) but I unmounted it quite quickly.
Can this do any harm?
Should I leave it to complete?
Can I safely remount?
My worry is that the kernel and mdadm think all the devices are 'up' and
so may write to them and upset the resync (I suspect it thinks /dev/sdf1
is dirty since that wasn't there under 2.6.6)
cu:~# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.01
Creation Time : Sun Nov 21 21:36:49 2004
Raid Level : raid5
Array Size : 1225543680 (1168.77 GiB 1254.96 GB)
Device Size : 245108736 (233.75 GiB 250.99 GB)
Raid Devices : 6
Total Devices : 7
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Thu Nov 25 12:51:46 2004
State : dirty, resyncing
Active Devices : 6
Working Devices : 7
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 4096K
Rebuild Status : 0% complete
UUID : 44e121b0:6e3422b0:4d67f451:51df5ae0
Events : 0.35500
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 49 3 active sync /dev/sdd1
4 3 65 4 active sync /dev/hdb1
5 8 81 5 active sync /dev/sdf1
6 8 65 - spare /dev/sde1
cu:~#
cu:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [raid6]
md0 : active raid5 sdf1[5] sde1[6] sdd1[3] sdc1[2] sdb1[1] sda1[0] hdb1[4]
1225543680 blocks level 5, 4096k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.3% (905600/245108736)
finish=304.3min speed=13369K/sec
unused devices: <none>
* Re: Help - this doesn't look good...
2004-11-27 18:22 ` Guy
@ 2004-11-27 19:20 ` David Greaves
0 siblings, 0 replies; 5+ messages in thread
From: David Greaves @ 2004-11-27 19:20 UTC (permalink / raw)
To: Guy; +Cc: linux-raid
Ta Guy,
Normally I'd have no concerns about using a resyncing array.
However, on the better-safe-than-sorry principle, I didn't mount it, since
I got the strange conflict from mdadm and indeed mdstat.
As much as anything this is an 'odd behaviour FYI' for Neil.
I let it resync and checked the fs - all OK :)
David
Guy wrote:
>If your only concern is the re-sync, then no problem.
>An array is usable while it is re-syncing.
>
>However, I don't know why it is re-syncing. Maybe the failed attempt to
>start the array is at fault. I don't know if this is normal or not.
>
>Guy
>
>-----Original Message-----
>From: linux-raid-owner@vger.kernel.org
>[mailto:linux-raid-owner@vger.kernel.org] On Behalf Of David Greaves
>Sent: Thursday, November 25, 2004 8:05 AM
>To: linux-raid@vger.kernel.org
>Subject: Help - this doesn't look good...
>
>Sigh,
>
>I'm having what might be xfs/nfsd conflicts and thought I'd reboot into
>an old 2.6.6 kernel which used to be stable.
>
>Of course it spotted the fd partitions and tried to start the array.
>It failed (the old kernel didn't have a driver for the new controller, so
>some devices were missing).
>
>However, when I came back to 2.6.9 I got the rather conflicting status
>shown below.
>
>It had already mounted (xfs) but I unmounted it quite quickly.
>Can this do any harm?
>
>Should I leave it to complete?
>Can I safely remount?
>
>My worry is that the kernel and mdadm think all the devices are 'up' and
>so may write to them and upset the resync (I suspect it thinks /dev/sdf1
>is dirty since that wasn't there under 2.6.6)
>
>cu:~# mdadm --detail /dev/md0
>/dev/md0:
> Version : 00.90.01
> Creation Time : Sun Nov 21 21:36:49 2004
> Raid Level : raid5
> Array Size : 1225543680 (1168.77 GiB 1254.96 GB)
> Device Size : 245108736 (233.75 GiB 250.99 GB)
> Raid Devices : 6
> Total Devices : 7
>Preferred Minor : 0
> Persistence : Superblock is persistent
>
> Update Time : Thu Nov 25 12:51:46 2004
> State : dirty, resyncing
> Active Devices : 6
>Working Devices : 7
> Failed Devices : 0
> Spare Devices : 1
>
> Layout : left-symmetric
> Chunk Size : 4096K
>
> Rebuild Status : 0% complete
>
> UUID : 44e121b0:6e3422b0:4d67f451:51df5ae0
> Events : 0.35500
>
> Number Major Minor RaidDevice State
> 0 8 1 0 active sync /dev/sda1
> 1 8 17 1 active sync /dev/sdb1
> 2 8 33 2 active sync /dev/sdc1
> 3 8 49 3 active sync /dev/sdd1
> 4 3 65 4 active sync /dev/hdb1
> 5 8 81 5 active sync /dev/sdf1
>
> 6 8 65 - spare /dev/sde1
>cu:~#
>cu:~# cat /proc/mdstat
>Personalities : [linear] [raid0] [raid1] [raid5] [raid6]
>md0 : active raid5 sdf1[5] sde1[6] sdd1[3] sdc1[2] sdb1[1] sda1[0] hdb1[4]
> 1225543680 blocks level 5, 4096k chunk, algorithm 2 [6/6] [UUUUUU]
> [>....................] resync = 0.3% (905600/245108736)
>finish=304.3min speed=13369K/sec
>unused devices: <none>
* Re: Help - this doesn't look good...
2004-11-25 13:05 Help - this doesn't look good David Greaves
2004-11-27 18:22 ` Guy
@ 2004-11-30 2:23 ` Neil Brown
2004-11-30 9:03 ` David Greaves
1 sibling, 1 reply; 5+ messages in thread
From: Neil Brown @ 2004-11-30 2:23 UTC (permalink / raw)
To: David Greaves; +Cc: linux-raid
On Thursday November 25, david@dgreaves.com wrote:
> Sigh,
>
> I'm having what might be xfs/nfsd conflicts and thought I'd reboot into
> an old 2.6.6 kernel which used to be stable.
>
> Of course it spotted the fd partitions and tried to start the array.
> It failed (the old kernel didn't have a driver for the new controller, so
> some devices were missing).
>
> However, when I came back to 2.6.9 I got the rather conflicting status
> shown below.
You might need to explain to me what is "conflicting". It looks to me
like the array has 6 working devices and one spare, and that due to an
unclean shutdown it is resyncing the 6 working devices.
NeilBrown
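(A sketch of how one might confirm this, not from the thread: the per-device
superblocks record the dirty/clean state and event counts that md consults
when deciding to resync on assembly. The device names are the ones from the
array above; the grep pattern is illustrative.)

    for d in /dev/sd[a-f]1 /dev/hdb1; do
        echo "== $d =="
        mdadm --examine "$d" | grep -E 'State|Events'
    done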
* Re: Help - this doesn't look good...
2004-11-30 2:23 ` Neil Brown
@ 2004-11-30 9:03 ` David Greaves
0 siblings, 0 replies; 5+ messages in thread
From: David Greaves @ 2004-11-30 9:03 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid
Neil Brown wrote:
>On Thursday November 25, david@dgreaves.com wrote:
>
>
>>Sigh,
>>
>>I'm having what might be xfs/nfsd conflicts and thought I'd reboot into
>>an old 2.6.6 kernel which used to be stable.
>>
>>Of course it spotted the fd partitions and tried to start the array.
>>It failed (the old kernel didn't have a driver for the new controller, so
>>some devices were missing).
>>
>>However, when I came back to 2.6.9 I got the rather conflicting status
>>shown below.
>>
>>
>
>You might need to explain to me what is "conflicting". It looks to me
>like the array has 6 working devices and one spare, and that due to an
>unclean shutdown it is resyncing the 6 working devices.
>
>
That's good - I wasn't aware of that mode of operation.
The only time I thought a resync occurred was when a drive 'failed'
and was then hot-added. Seeing a resync with no inactive drives looked
inconsistent.
mdadm doesn't indicate which drive is resyncing?
Is there a particular drive at fault or is this more of a sync-check
than a resync?
Maybe one for the docs:
The md raid driver will resync in two situations. A resync occurs when a
drive is hot-added to a degraded array (usually at first assembly, and any
time after a drive failure); in this case the array shows the resyncing
drive as inactive until the resync completes and it is integrated into the
array. A resync can also occur after an unclean shutdown, when (presumably)
there may be an inconsistency across the array devices; in this case all
the elements remain active and the resync just checks that no corruption
has occurred.
David
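(An illustrative follow-up to the two cases described above, not from the
thread: the difference shows up in the standard status output, so a quick
check distinguishes them. The expected wording is described in comments
rather than reproduced as captured output.)

    cat /proc/mdstat         # a rebuild onto a spare is labelled "recovery" and the array is
                             # degraded (e.g. [6/5]); a post-crash parity check is labelled
                             # "resync" and all members stay up (e.g. [6/6] [UUUUUU])
    mdadm --detail /dev/md0  # during recovery the new disk is not yet listed as "active sync";
                             # during a resync every member remains "active sync", as in this thread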