* Raid1 problem can't add remove or mark faulty -- it did work
From: rrk @ 2005-03-27 2:36 UTC
To: linux-raid
I have a strange problem -- I can't get a fully functional two-drive RAID1
back up and running. It may or may not be a drive/BIOS interaction; I don't
know. None of the mdadm manage functions (add, remove, or mark faulty) will work.
I have purged and reinstalled the mdadm package twice.
Below is all the info I could think of. The kernel is 2.6.10,
patched -- stock Kanotix.
Either drive will boot, and the behavior is the same no matter which one
is active.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
currently installed version of mdadm
mdadm:
Installed: 1.9.0-2
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
hda partition table
Disk geometry for /dev/hda: 0.000-152627.835 megabytes
Disk label type: msdos
Minor Start End Type Filesystem Flags
1 0.031 151629.125 primary ext3 raid
2 151629.126 152625.344 primary linux-swap
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
hdc partition table
Disk geometry for /dev/hdc: 0.000-152627.835 megabytes
Disk label type: msdos
Minor Start End Type Filesystem Flags
1 0.031 151629.125 primary ext3 raid
2 151629.126 152625.344 primary
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
examine /dev/hda1
root@crm_svr:/home/rob# mdadm -E /dev/hda1
/dev/hda1:
Magic : a92b4efc
Version : 00.90.01
UUID : bcbf4556:4aba6daf:5661e4a2:32fcc1db
Creation Time : Mon Mar 21 17:22:34 2005
Raid Level : raid1
Raid Devices : 2
Total Devices : 1
Preferred Minor : 0
Update Time : Sat Mar 26 19:33:08 2005
State : clean
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Checksum : c7caacc2 - correct
Events : 0.17321
Number Major Minor RaidDevice State
this 0 3 1 0 active sync /dev/hda1
0 0 3 1 0 active sync /dev/hda1
1 1 0 0 1 faulty removed
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
examine /dev/hdc1
root@crm_svr:/home/rob# mdadm -E /dev/hdc1
/dev/hdc1:
Magic : a92b4efc
Version : 00.90.01
UUID : bcbf4556:4aba6daf:5661e4a2:32fcc1db
Creation Time : Mon Mar 21 17:22:34 2005
Raid Level : raid1
Raid Devices : 2
Total Devices : 1
Preferred Minor : 0
Update Time : Sat Mar 26 20:23:04 2005
State : clean
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Checksum : c7cabae4 - correct
Events : 0.17613
Number Major Minor RaidDevice State
this 1 22 1 1 active sync /dev/hdc1
0 0 0 0 0 removed
1 1 22 1 1 active sync /dev/hdc1
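(Note the Events counters in the two superblocks: 0.17321 on hda1 vs 0.17613
on hdc1 -- hdc1's superblock is newer, so at assembly time hda1 is treated
as stale.)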
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
what happens when I try a hot add, remove, or set faulty
root@crm_svr:/home/rob# mdadm /dev/md0 -a /dev/hda1
mdadm: hot add failed for /dev/hda1: Invalid argument
root@crm_svr:/home/rob# mdadm /dev/md0 -a /dev/hdc1
mdadm: hot add failed for /dev/hdc1: Invalid argument
root@crm_svr:/home/rob# mdadm /dev/md0 -r /dev/hda1
mdadm: hot remove failed for /dev/hda1: No such device or address
root@crm_svr:/home/rob# mdadm /dev/md0 -r /dev/hdc1
mdadm: hot remove failed for /dev/hdc1: Device or resource busy
root@crm_svr:/home/rob# mdadm /dev/md0 -f /dev/hda1
mdadm: set device faulty failed for /dev/hda1: No such device
root@crm_svr:/home/rob# mdadm /dev/md0 -f /dev/hdc1
mdadm: set /dev/hdc1 faulty in /dev/md0
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
boot info from dmesg
Probing IDE interface ide0...
hda: WDC WD1600JB-00GVA0, ATA DISK drive
hdb: SAMSUNG CD-ROM SH-152A, ATAPI CD/DVD-ROM drive
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
Probing IDE interface ide1...
hdc: WDC WD1600JB-00GVA0, ATA DISK drive
ide1 at 0x170-0x177,0x376 on irq 15
hda: max request size: 1024KiB
hda: 312581808 sectors (160041 MB) w/8192KiB Cache, CHS=19457/255/63,
UDMA(100)
hda: cache flushes supported
hda: hda1 hda2
hdc: max request size: 1024KiB
hdc: 312581808 sectors (160041 MB) w/8192KiB Cache, CHS=19457/255/63,
UDMA(100)
hdc: cache flushes supported
hdc: hdc1 hdc2
hdb: ATAPI 52X CD-ROM drive, 128kB Cache, UDMA(33)
Uniform CD-ROM driver Revision: 3.20
md: linear personality registered as nr 1
md: raid0 personality registered as nr 2
md: raid1 personality registered as nr 3
md: raid10 personality registered as nr 9
md: raid5 personality registered as nr 4
raid5: automatically using best checksumming function: pIII_sse
pIII_sse : 1448.000 MB/sec
raid5: using function: pIII_sse (1448.000 MB/sec)
raid6: int32x1 621 MB/s
raid6: int32x2 777 MB/s
raid6: int32x4 468 MB/s
raid6: int32x8 457 MB/s
raid6: mmxx1 1363 MB/s
raid6: mmxx2 2484 MB/s
raid6: sse1x1 1246 MB/s
raid6: sse1x2 2140 MB/s
raid6: using algorithm sse1x2 (2140 MB/s)
md: raid6 personality registered as nr 8
md: multipath personality registered as nr 7
md: md driver 0.90.1 MAX_MD_DEVS=256, MD_SB_DISKS=27
device-mapper: 4.3.0-ioctl (2004-09-30) initialised: dm-devel@redhat.com
md: Autodetecting RAID arrays.
md: autorun ...
md: considering hdc1 ...
md: adding hdc1 ...
md: adding hda1 ...
md: created md0
md: bind<hda1>
md: bind<hdc1>
md: running: <hdc1><hda1>
md: kicking non-fresh hda1 from array!
md: unbind<hda1>
md: export_rdev(hda1)
raid1: raid set md0 active with 1 out of 2 mirrors
md: ... autorun DONE.
EXT3-fs: mounted filesystem with ordered data mode.
VFS: Mounted root (ext3 filesystem) readonly.
Freeing unused kernel memory: 276k freed
kjournald starting. Commit interval 5 seconds
Warning: /proc/ide/hd?/settings interface is obsolete, and will be
removed soon!
EXT3 FS on md0, internal journal
Adding 1020116k swap on /dev/hda2. Priority:-1 extents:1
EXT3 FS on md0, internal journal
kjournald starting. Commit interval 5 seconds
EXT3 FS on hda1, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
EFS: 1.0a - http://aeschi.ch.eu.org/efs/
reiser4[mount(1122)]: _init_read_super
(fs/reiser4/init_super.c:198)[nikita-2608]:
WARNING: hdc2: wrong master super block magic.
NTFS driver 2.1.22 [Flags: R/W MODULE].
* Re: Raid1 problem can't add remove or mark faulty -- it did work
From: Neil Brown @ 2005-03-27 7:26 UTC
To: rrk; +Cc: linux-raid
On Saturday March 26, rrk@prairie.lakes.com wrote:
> I have a strange problem -- I can't get a fully functional two-drive RAID1
> back up and running. It may or may not be a drive/BIOS interaction; I don't
> know. None of the mdadm manage functions (add, remove, or mark faulty) will work.
> I have purged and reinstalled the mdadm package twice.
>
> Below is all the info I could think of. The kernel is 2.6.10,
> patched -- stock Kanotix.
> Either drive will boot, and the behavior is the same no matter which one
> is active.
For future reference, extra information which would be helpful
includes
cat /proc/mdstat
mdadm -D /dev/md0
and any 'dmesg' messages that are generated when 'mdadm' fails.
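For example, one way to capture both the failure and the kernel's reaction
in one go (a sketch; the tail count is arbitrary):
      mdadm /dev/md0 -a /dev/hda1 ; dmesg | tail -20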
It appears that hda1 and hdc1 are parts of the raid1, and hdc1 is the
'freshest' part so when you boot, the array is assembled with just one
drive: hdc1. hda1 is not included because it appears to be out of
date, and so presumably failed at some time.
The right thing to do is to re-add hda1 with
mdadm /dev/md0 -a /dev/hda1
It appears that you tried this and it failed. When it failed, there
should have been kernel messages generated; I need to see these.
> what happens when I try a hot add, remove, or set faulty
>
> root@crm_svr:/home/rob# mdadm /dev/md0 -a /dev/hda1
> mdadm: hot add failed for /dev/hda1: Invalid argument
This should have worked, but didn't. The kernel messages should
indicate why.
> root@crm_svr:/home/rob# mdadm /dev/md0 -a /dev/hdc1
> mdadm: hot add failed for /dev/hdc1: Invalid argument
>
hdc1 is already part of md0. Adding it is meaningless.
> root@crm_svr:/home/rob# mdadm /dev/md0 -r /dev/hda1
> mdadm: hot remove failed for /dev/hda1: No such device or address
You cannot remove hda1 because it isn't part of the array.
> root@crm_svr:/home/rob# mdadm /dev/md0 -r /dev/hdc1
> mdadm: hot remove failed for /dev/hdc1: Device or resource busy
>
You cannot remove hdc1 because it is actively in use in the array.
You can only remove failed drives or spares.
> root@crm_svr:/home/rob# mdadm /dev/md0 -f /dev/hda1
> mdadm: set device faulty failed for /dev/hda1: No such device
You cannot fail hda1 because it isn't part of md0.
> root@crm_svr:/home/rob# mdadm /dev/md0 -f /dev/hdc1
> mdadm: set /dev/hdc1 faulty in /dev/md0
Oops... you just failed the only drive in the raid1 array; md0 will
no longer be functional... until you reboot and the array gets
re-assembled. Failing a drive does not write anything to it, so you
won't have hurt any drive by doing this, just made the array stop
working for now.
NeilBrown
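(Once the array is assembled again after a reboot, the usual recovery -- a
sketch, assuming nothing else is claiming hda1 -- is to re-add the stale
half and watch the resync:
      mdadm /dev/md0 -a /dev/hda1
      cat /proc/mdstat          # [_U] becomes [UU] when the resync finishes
)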
* Raid1 problem can't add remove or mark faulty -- it did work
From: rrk @ 2005-03-27 18:38 UTC
To: linux-raid
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
mdstat after reboot
root@crm_svr:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6]
[raid10]
md0 : active raid1 hdc1[1]
155268096 blocks [2/1] [_U]
unused devices: <none>
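(Reading that status line: "[2/1]" means the array expects 2 devices but only
1 is active; "[_U]" shows slot 0 -- hda1 -- missing and slot 1 -- hdc1 -- up.)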
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
the error below shows after a boot when the command is executed in a KDE Konsole
root@crm_svr:/home/rob# mdadm /dev/md0 -a /dev/hda1
mdadm: hot add failed for /dev/hda1: Invalid argument
but this is what is shown on a tty console
md: could not bd_claim hda1
md: error, md_import_device() returned -16
mdadm: hot add failed for /dev/hda1 : invalid argument
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
this is what shows in dmesg
md: could not bd_claim hda1.
md: error, md_import_device() returned -16
md: could not bd_claim hda1.
md: error, md_import_device() returned -16
md: could not bd_claim hda1.
md: error, md_import_device() returned -16
I had this bd_claim error when I first tried to build the array,
but it went away when I wiped everything and started over.
This box is the only place I have seen this error, and I
tried using different drives -- but of the same type -- when
the problem showed up on the initial build.
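(The -16 returned by md_import_device() is -EBUSY: the kernel could not get
exclusive access to hda1 because something else already had it open. The
errno mapping can be confirmed with, e.g. -- header path varies by
distribution:
      grep -w EBUSY /usr/include/asm-generic/errno-base.h
)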
* RE: Raid1 problem can't add remove or mark faulty -- it did work
From: Guy @ 2005-03-27 19:03 UTC
To: 'rrk', linux-raid
I am not sure what bd_claim is, but it is somewhat like open(). My guess is
your disk is in use, maybe mounted. Run this command and send the output:
df
Guy
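(A few other commands that show what is holding a partition open -- a sketch;
lsof and fuser may not be installed on every system:
      mount | grep hda1        # is it mounted somewhere?
      fuser -vm /dev/hda1      # which processes are using a mount of it?
      lsof /dev/hda1           # which processes hold the block device open?
)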
* Re: Raid1 problem can't add remove or mark faulty -- it did work
From: rrk @ 2005-03-27 19:30 UTC
To: linux-raid
Guy wrote:
> I am not sure what bd_claim is, but it is somewhat like open(). My guess is
> your disk is in use, maybe mounted. Run this command and send the output:
>
> df
>
> Guy
Yep, that was it -- there is a whizzing wizard somewhere; it is not
fstab -- though I checked that.
Sorry for the trouble, and thanks.
rob
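(Presumably the fix, once the stray mount is found -- a sketch:
      umount /dev/hda1                 # release the partition
      mdadm /dev/md0 -a /dev/hda1      # re-add it; md starts the resync itself
)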