* Antw: Problems creating MD-RAID1: "device .. not suitable for any style of raid array" / "Device or resource busy"
From: Ulrich Windl @ 2011-05-11 14:38 UTC
  To: linux-raid

Hi!

I could reduce the problem to "--bitmap internal": it works if I leave that option out. Unfortunately the RAID will then fully resync on every assembly after an unclean shutdown.
The thing that kept one device busy was MD-RAID itself: "mdadm --stop" released the device.
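
For anyone hitting the same thing, a cleanup along these lines (a sketch, using the device names from the transcript below) releases the devices without a reboot:

# cat /proc/mdstat                              # check whether md already holds the devices
# mdadm --stop /dev/md0                         # stopping the array releases its member devices
# mdadm --zero-superblock /dev/xvdd /dev/xvde   # optionally wipe stale superblocks before retrying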

So to summarize:
1) mdadm fails to set up an internal bitmap
2) Even when mdadm fails to do 1), it starts the array

Regards,
Ulrich


>>> Ulrich Windl wrote on 2011-05-06 at 14:15 in message <4DC3E67F.8BA:161:60728>:
> Hello!
> 
> I'm having strange trouble with SLES11 SP1 and MD RAID1 (mdadm v3.0.3 
> (mdadm-3.0.3-0.22.4), kernel 2.6.32.36-0.5-xen):
> 
> I was able to create one RAID1 array, but not a second one. I have no idea 
> what's wrong, but my guesses are:
> 
> 1) An error when using "--bitmap internal" for a 30GB disk
> 2) Unsure whether the disks need an msdos signature or a RAID partition 
> (see the sketch after this list)
> 3) It seems a failed attempt to create the array keeps one device busy (a 
> reboot(!) resolves that problem for one attempt)
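> 
> For illustration of the partition variant in guess 2 (hypothetical, not 
> tested here): put an msdos label and one partition with the raid flag 
> (type 0xfd) on each disk, then build the array from the partitions:
> 
>    # parted /dev/xvdd mklabel msdos
>    # parted /dev/xvdd mkpart primary 0% 100%
>    # parted /dev/xvdd set 1 raid on
>    (and the same for /dev/xvde)
>    # mdadm -C -l1 -n2 --bitmap=internal /dev/md0 /dev/xvdd1 /dev/xvde1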
> 
> Some output:
> # mdadm -C -l1 -n2 --bitmap internal /dev/md0 /dev/xvdd /dev/xvde
> mdadm: device /dev/xvdd not suitable for any style of array
> (Reboot)
> # mdadm -C -l1 -n2  /dev/md0 /dev/xvdd /dev/xvde
> mdadm: /dev/xvdd appears to be part of a raid array:
>     level=raid1 devices=2 ctime=Wed May  4 12:00:12 2011
> Continue creating array? y
> mdadm: array /dev/md0 started.
> 
> # mdadm --grow --bitmap internal /dev/md0
> mdadm: failed to set internal bitmap.
> # mdadm --stop /dev/md0
> mdadm: stopped /dev/md0
> # mdadm -C -l1 -n2 --bitmap internal /dev/md0 /dev/xvdd /dev/xvde
> mdadm: /dev/xvdd appears to be part of a raid array:
>     level=raid1 devices=2 ctime=Wed May  4 12:38:20 2011
> mdadm: /dev/xvde appears to be part of a raid array:
>     level=raid1 devices=2 ctime=Wed May  4 12:38:20 2011
> Continue creating array? y
> mdadm: failed to write superblock to /dev/xvdd
> mdadm: ADD_NEW_DISK for /dev/xvde failed: Device or resource busy
> # mdadm -C -l1 -n2 --bitmap internal /dev/md0 /dev/xvdd /dev/xvde
> mdadm: device /dev/xvdd not suitable for any style of array
> # xm reboot rksapv01
> # mdadm -C -l1 -n2 --bitmap internal /dev/md0 /dev/xvdd /dev/xvde
> mdadm: /dev/xvdd appears to be part of a raid array:
>     level=raid1 devices=2 ctime=Wed May  4 13:26:55 2011
> mdadm: /dev/xvde appears to be part of a raid array:
>     level=raid1 devices=2 ctime=Wed May  4 12:38:20 2011
> Continue creating array? y
> mdadm: failed to write superblock to /dev/xvdd
> mdadm: ADD_NEW_DISK for /dev/xvde failed: Device or resource busy
> 
> # mdadm -C -l1 -n2 --bitmap internal /dev/md0 /dev/xvdd /dev/xvde   
> mdadm: /dev/xvdd appears to be part of a raid array:
>     level=raid1 devices=2 ctime=Wed May  4 13:37:28 2011
> mdadm: /dev/xvde appears to be part of a raid array:
>     level=raid1 devices=2 ctime=Wed May  4 12:38:20 2011
> Continue creating array? y
> mdadm: failed to write superblock to /dev/xvdd
> mdadm: ADD_NEW_DISK for /dev/xvde failed: Device or resource busy
> # mdadm -C -l1 -n2 --bitmap internal /dev/md0 /dev/xvdd /dev/xvde
> mdadm: device /dev/xvdd not suitable for any style of array
> 
> (At this point I filed a service request for SLES with no result until now)
> 
> Trying some other disks (of varying size):
> # mdadm -C -l1 -n2 --bitmap internal /dev/md0 /dev/xvdd /dev/xvde   
> mdadm: /dev/xvdd appears to be part of a raid array:
>     level=raid1 devices=2 ctime=Wed May  4 13:37:28 2011
> mdadm: /dev/xvde appears to be part of a raid array:
>     level=raid1 devices=2 ctime=Wed May  4 12:38:20 2011
> Continue creating array? y
> mdadm: failed to write superblock to /dev/xvdd
> mdadm: ADD_NEW_DISK for /dev/xvde failed: Device or resource busy
> # mdadm -C -l1 -n2 --bitmap internal /dev/md0 /dev/xvdd /dev/xvde
> mdadm: device /dev/xvdd not suitable for any style of array
> 
> (another Reboot)
> # mdadm -C -l1 -n2 --bitmap internal /dev/md1 /dev/xvdf /dev/xvdg
> mdadm: /dev/xvdf appears to be part of a raid array:
>     level=raid1 devices=2 ctime=Wed May  4 16:36:59 2011
> Continue creating array? y
> mdadm: failed to write superblock to /dev/xvdf
> mdadm: ADD_NEW_DISK for /dev/xvdg failed: Device or resource busy
> # mdadm -C -l1 -n2 --bitmap internal /dev/md2 /dev/xvdh /dev/xvdi
> mdadm: /dev/xvdh appears to be part of a raid array:
>     level=raid1 devices=2 ctime=Wed May  4 16:37:26 2011
> Continue creating array? y
> mdadm: failed to write superblock to /dev/xvdh
> mdadm: ADD_NEW_DISK for /dev/xvdi failed: Device or resource busy
> # mdadm -C -l1 -n2 --bitmap internal /dev/md3 /dev/xvdj /dev/xvdk
> mdadm: /dev/xvdj appears to be part of a raid array:
>     level=raid1 devices=2 ctime=Wed May  4 16:37:38 2011
> Continue creating array? y
> mdadm: failed to write superblock to /dev/xvdj
> mdadm: ADD_NEW_DISK for /dev/xvdk failed: Device or resource busy
> 
> Corresponding Syslog messages:
> May  4 17:18:54 rksapv01 kernel: [  231.942241] md: bind<xvdf>
> May  4 17:18:54 rksapv01 kernel: [  231.942265] md: could not bd_claim xvdg.
> May  4 17:18:54 rksapv01 kernel: [  231.942269] md: md_import_device returned -16
> May  4 17:19:13 rksapv01 kernel: [  250.118561] md: bind<xvdh>
> May  4 17:19:13 rksapv01 kernel: [  250.118586] md: could not bd_claim xvdi.
> May  4 17:19:13 rksapv01 kernel: [  250.118590] md: md_import_device returned -16
> May  4 17:19:27 rksapv01 kernel: [  264.505337] md: bind<xvdj>
> May  4 17:19:27 rksapv01 kernel: [  264.505365] md: could not bd_claim xvdk.
> May  4 17:19:27 rksapv01 kernel: [  264.505368] md: md_import_device returned -16
> 
> You can probably understand that I'm quite frustrated. Maybe I should 
> mention that the disks come from an FC SAN with 4-way multipath 
> (multipath-tools-0.4.8-40.25.1). Also, "lsof" finds no process that has the 
> device open.
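> 
> (Note that "lsof" only sees devices opened by processes; it cannot show a 
> kernel-internal claim such as md's bd_claim. Something like the following 
> should list any kernel-side holder of a given device:)
> 
>    # ls /sys/block/xvdg/holders/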
> 
> Finally: I had a similar problem with SLES10 SP3, which made me quit using 
> MD-RAID about two years ago...
> 
> Regards,
> Ulrich
> P.S.: I'm not subscribed to the list, so please CC me -- thank you
> 
> 


 


* Re: Antw: Problems creating MD-RAID1: "device .. not suitable for any style of raid array" / "Device or resource busy"
From: John Robinson @ 2011-05-11 17:00 UTC
  To: Ulrich Windl; +Cc: linux-raid

On 11/05/2011 15:38, Ulrich Windl wrote:
> Hi!
>
> I could reduce the problem to "--bitmap internal": it works if I leave that option out. Unfortunately the RAID will then fully resync on every assembly after an unclean shutdown.
> The thing that kept one device busy was MD-RAID itself: "mdadm --stop" released the device.
>
> So to summarize:
> 1) mdadm fails to set up an internal bitmap
> 2) Even when mdadm fails to do 1), it starts the array

OK. First up, the syntax is "--bitmap=internal" - perhaps you have an 
mdadm which partially parses what you've typed but doesn't get it 100% 
right. If using the correct syntax doesn't work, instead try creating 
the array without the bitmap, then adding one with
   mdadm --grow /dev/mdX --bitmap=internal
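
Either way, you can check whether the bitmap actually took effect with
   cat /proc/mdstat
which should then show a "bitmap: ..." line under the array, or with
   mdadm -X /dev/xvdd
(--examine-bitmap) on one of the member devices.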

Cheers,

John.



* Re: Antw: Problems creating MD-RAID1: "device .. not suitable for any style of raid array" / "Device or resource busy"
From: Ulrich Windl @ 2011-05-12  6:07 UTC
  To: John Robinson; +Cc: linux-raid

>>> John Robinson <john.robinson@anonymous.org.uk> wrote on 2011-05-11 at 19:00 in message <4DCAC0B3.8010509@anonymous.org.uk>:
> On 11/05/2011 15:38, Ulrich Windl wrote:
> > Hi!
> >
> > I could reduce the problem to "--bitmap internal": it works if I leave 
> > that option out. Unfortunately the RAID will then fully resync on every 
> > assembly after an unclean shutdown.
> > The thing that kept one device busy was MD-RAID itself: "mdadm --stop" 
> > released the device.
> >
> > So to summarize:
> > 1) mdadm fails to set up an internal bitmap
> > 2) Even when mdadm fails to do 1), it starts the array
> 
> OK. First up, the syntax is "--bitmap=internal" - perhaps you have an 
> mdadm which partially parses what you've typed but doesn't get it 100% 
> right. If using the correct syntax doesn't work, instead try creating 
> the array without the bitmap, then adding one with
>    mdadm --grow /dev/mdX --bitmap=internal

Hi John!

Thanks for that. Of course (using my syntax) the "--grow" variant did not work either. I would also expect mdadm to either handle "--bitmap internal" as a variant of "--bitmap=internal" or to emit a syntax error. Unfortunately I don't have the time right now to inspect the sources. (Note, from the P.S. below, that both syntaxes behave the same after all: the bitmap succeeded on md0/md1/md2 with either form and failed only on md3, the 30GB pair from my original report.)

Regards,
Ulrich
P.S. Amazing things happen now and then:
# cat /proc/mdstat
Personalities : [linear] [raid1]
md2 : active raid1 dm-17[1] dm-10[0]
      209715136 blocks [2/2] [UU]

md1 : active raid1 dm-12[0] dm-14[1]
      20971456 blocks [2/2] [UU]

md3 : active raid1 dm-16[0] dm-9[1]
      31457216 blocks [2/2] [UU]

md0 : active raid1 dm-11[0] dm-8[1]
      525336512 blocks [2/2] [UU]

unused devices: <none>
# mdadm --grow --bitmap=internal /dev/md0
# mdadm --grow --bitmap internal /dev/md1
# mdadm --grow --bitmap internal /dev/md2
# mdadm --grow --bitmap=internal /dev/md3
mdadm: failed to set internal bitmap.
# cat /proc/mdstat
Personalities : [linear] [raid1]
md2 : active raid1 dm-17[1] dm-10[0]
      209715136 blocks [2/2] [UU]
      bitmap: 0/200 pages [0KB], 512KB chunk

md1 : active raid1 dm-12[0] dm-14[1]
      20971456 blocks [2/2] [UU]
      bitmap: 0/160 pages [0KB], 64KB chunk

md3 : active raid1 dm-16[0] dm-9[1]
      31457216 blocks [2/2] [UU]

md0 : active raid1 dm-11[0] dm-8[1]
      525336512 blocks [2/2] [UU]
      bitmap: 0/126 pages [0KB], 2048KB chunk

unused devices: <none>




* Re: Antw: Problems creating MD-RAID1: "device .. not suitable for any style of raid array" / "Device or resource busy"
From: Ulrich Windl @ 2011-05-12 10:38 UTC
  To: John Robinson; +Cc: linux-raid

>>> Ulrich Windl wrote on 2011-05-12 at 08:07 in message <4DCB793A.C19:161:60728>:

Indeed, the problem seems to be a miscalculated bitmap size or position:

May 12 08:06:00 hostname kernel: [55956.959794] md3: bitmap file is out of date (0 < 70) -- forcing full recovery
May 12 08:06:00 hostname kernel: [55956.959798] md3: bitmap file is out of date, doing full recovery
May 12 08:06:00 hostname kernel: [55957.054583] attempt to access beyond end of device
May 12 08:06:00 hostname kernel: [55957.054585] dm-16: rw=192, want=62914561, limit=62914560
May 12 08:06:00 hostname kernel: [55957.054587] attempt to access beyond end of device
May 12 08:06:00 hostname kernel: [55957.054588] dm-9: rw=192, want=62914561, limit=62914560
May 12 08:06:00 hostname kernel: [55957.054589] md3: bitmap initialisation failed: -5
May 12 08:06:00 hostname kernel: [55957.055058] md: couldn't update array info. -5
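
The numbers fit a one-sector overrun: both mirror legs are 62914560 sectors long (30 GiB, since 62914560 * 512 = 32212254720 bytes), yet the bitmap write touches sector 62914561, one past the end. A quick size check (a sketch; dm names taken from the log, outputs inferred from the limit= values above):

# blockdev --getsz /dev/dm-16   # device size in 512-byte sectors
62914560
# blockdev --getsz /dev/dm-9
62914560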

[...]
> P.S. Amazing things happen now and then:
> # cat /proc/mdstat
> Personalities : [linear] [raid1]
> md2 : active raid1 dm-17[1] dm-10[0]
>       209715136 blocks [2/2] [UU]
> 
> md1 : active raid1 dm-12[0] dm-14[1]
>       20971456 blocks [2/2] [UU]
> 
> md3 : active raid1 dm-16[0] dm-9[1]
>       31457216 blocks [2/2] [UU]
> 
> md0 : active raid1 dm-11[0] dm-8[1]
>       525336512 blocks [2/2] [UU]
> 
> unused devices: <none>
> # mdadm --grow --bitmap=internal /dev/md0
> # mdadm --grow --bitmap internal /dev/md1
> # mdadm --grow --bitmap internal /dev/md2
> # mdadm --grow --bitmap=internal /dev/md3
> mdadm: failed to set internal bitmap.
> # cat /proc/mdstat
> Personalities : [linear] [raid1]
> md2 : active raid1 dm-17[1] dm-10[0]
>       209715136 blocks [2/2] [UU]
>       bitmap: 0/200 pages [0KB], 512KB chunk
> 
> md1 : active raid1 dm-12[0] dm-14[1]
>       20971456 blocks [2/2] [UU]
>       bitmap: 0/160 pages [0KB], 64KB chunk
> 
> md3 : active raid1 dm-16[0] dm-9[1]
>       31457216 blocks [2/2] [UU]
> 
> md0 : active raid1 dm-11[0] dm-8[1]
>       525336512 blocks [2/2] [UU]
>       bitmap: 0/126 pages [0KB], 2048KB chunk
> 
> unused devices: <none>
> 
> 


 

