linux-raid.vger.kernel.org archive mirror
* invalid bitmap file superblock: bad magic
@ 2012-03-08 16:11 Arkadiusz Miśkiewicz
  2012-03-08 16:41 ` Kai Stian Olstad
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Arkadiusz Miśkiewicz @ 2012-03-08 16:11 UTC (permalink / raw)
  To: linux-raid

Hello.

I'm trying to create an internal bitmap on a freshly created
raid1 array on a 3.2.6 kernel.

What's going on, and how can I fix it?

[root@setebos ~]# mdadm /dev/md3 --grow --bitmap=internal                                                                                                   
mdadm: failed to set internal bitmap.
[root@setebos ~]# dmesg|tail -n 16                                                                                                                          
[10455.564598] mdadm: sending ioctl 1261 to a partition!
[10455.564601] mdadm: sending ioctl 1261 to a partition!
[10455.566566] mdadm: sending ioctl 1261 to a partition!
[10455.566570] mdadm: sending ioctl 1261 to a partition!
[10455.568480] md3: invalid bitmap file superblock: bad magic
[10455.568483] md3: bitmap file superblock:
[10455.568485]          magic: 00000000
[10455.568487]        version: 0
[10455.568489]           uuid: 00000000.00000000.00000000.00000000
[10455.568491]         events: 0
[10455.568492] events cleared: 0
[10455.568494]          state: 00000000
[10455.568495]      chunksize: 0 B
[10455.568497]   daemon sleep: 0s
[10455.568498]      sync size: 0 KB
[10455.568499] max write behind: 0
[root@setebos ~]# mdadm --detail /dev/md3
/dev/md3:
        Version : 1.2
  Creation Time : Thu Mar  8 13:54:16 2012
     Raid Level : raid1
     Array Size : 931760807 (888.60 GiB 954.12 GB)
  Used Dev Size : 931760807 (888.60 GiB 954.12 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Thu Mar  8 18:46:11 2012
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : rescue:3
           UUID : 390e347b:c63b1a0e:1ff4f4a5:bc51f5c6
         Events : 24

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       1       8       20        1      active sync   /dev/sdb4
[root@setebos ~]# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda4[0] sdb4[1]
      931760807 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      995904 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      3999117 blocks super 1.2 [2/2] [UU]

md2 : active raid1 sda3[0] sdb3[1]
      40000754 blocks super 1.2 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
[root@setebos ~]#
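[Editor's note, not part of the original report: the kernel refuses the grow because the first bytes at the bitmap offset do not contain md's bitmap superblock magic; the dump above shows "magic: 00000000", i.e. an all-zero region where a superblock was expected. A minimal sketch of that check in Python, assuming the magic value 0x6d746962 (ASCII "bitm", little-endian) from the kernel's md bitmap code:]

```python
import struct

# md's bitmap superblock magic: the ASCII bytes "bitm" read as a
# little-endian 32-bit value.  Value assumed from the kernel's md
# bitmap code; verify against your kernel source.
BITMAP_MAGIC = 0x6d746962

def has_valid_magic(superblock: bytes) -> bool:
    """Return True if the first 4 bytes match the md bitmap magic."""
    if len(superblock) < 4:
        return False
    (magic,) = struct.unpack_from("<I", superblock, 0)
    return magic == BITMAP_MAGIC

# The dmesg dump above shows "magic: 00000000" -- the kernel read an
# all-zero page where it expected a bitmap superblock:
print(has_valid_magic(bytes(64)))            # all zeroes -> False
print(has_valid_magic(b"bitm" + bytes(60)))  # proper magic -> True
```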


-- 
Arkadiusz Miśkiewicz        PLD/Linux Team
arekm / maven.pl            http://ftp.pld-linux.org/
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 8+ messages in thread
* Invalid bitmap file superblock: bad magic
@ 2014-05-23  3:40 Liu H.T
  0 siblings, 0 replies; 8+ messages in thread
From: Liu H.T @ 2014-05-23  3:40 UTC (permalink / raw)
  To: linux-raid

Hi all,

A RAID5 array of 14 disks failed to assemble with the command mdadm -A
-f -s, although everything was fine before the system reboot.

Log from /var/log/dmesg:
md: md0 stopped.
md: bind<sdq>
md: bind<sdm>
md: bind<sde>
md: bind<sdg>
md: bind<sdc>
md: bind<sdb>
md: bind<sdh>
md: bind<sdf>
md: bind<sdd>
md: bind<sdl>
md: bind<sdk>
md: bind<sdp>
md: bind<sdi>
md: bind<sdn>
md: bind<sdj>
md: bind<sdo>
async_tx: api initialized (async)
xor: automatically using best checksumming function: generic_sse
   generic_sse: 14476.000 MB/sec
xor: using function: generic_sse (14476.000 MB/sec)
raid6: int64x1   3917 MB/s
raid6: int64x2   4234 MB/s
raid6: int64x4   3507 MB/s
raid6: int64x8   2855 MB/s
raid6: sse2x1    8937 MB/s
raid6: sse2x2   10960 MB/s
raid6: sse2x4   13027 MB/s
raid6: using algorithm sse2x4 (13027 MB/s)
md: raid6 personality registered for level 6
md: raid5 personality registered for level 5
md: raid4 personality registered for level 4
raid5: device sdo operational as raid disk 0
raid5: device sdi operational as raid disk 13
raid5: device sdp operational as raid disk 12
raid5: device sdk operational as raid disk 11
raid5: device sdl operational as raid disk 10
raid5: device sdd operational as raid disk 9
raid5: device sdf operational as raid disk 8
raid5: device sdh operational as raid disk 7
raid5: device sdb operational as raid disk 6
raid5: device sdc operational as raid disk 5
raid5: device sdg operational as raid disk 4
raid5: device sde operational as raid disk 3
raid5: device sdm operational as raid disk 2
raid5: device sdq operational as raid disk 1
raid5: allocated 14802kB for md0
0: w=1 pa=0 pr=14 m=1 a=2 r=14 op1=0 op2=0
13: w=2 pa=0 pr=14 m=1 a=2 r=14 op1=0 op2=0
12: w=3 pa=0 pr=14 m=1 a=2 r=14 op1=0 op2=0
11: w=4 pa=0 pr=14 m=1 a=2 r=14 op1=0 op2=0
10: w=5 pa=0 pr=14 m=1 a=2 r=14 op1=0 op2=0
9: w=6 pa=0 pr=14 m=1 a=2 r=14 op1=0 op2=0
8: w=7 pa=0 pr=14 m=1 a=2 r=14 op1=0 op2=0
7: w=8 pa=0 pr=14 m=1 a=2 r=14 op1=0 op2=0
6: w=9 pa=0 pr=14 m=1 a=2 r=14 op1=0 op2=0
5: w=10 pa=0 pr=14 m=1 a=2 r=14 op1=0 op2=0
4: w=11 pa=0 pr=14 m=1 a=2 r=14 op1=0 op2=0
3: w=12 pa=0 pr=14 m=1 a=2 r=14 op1=0 op2=0
2: w=13 pa=0 pr=14 m=1 a=2 r=14 op1=0 op2=0
1: w=14 pa=0 pr=14 m=1 a=2 r=14 op1=0 op2=0
raid5: raid level 5 set md0 active with 14 out of 14 devices, algorithm 2
RAID5 conf printout:
--- rd:14 wd:14
disk 0, o:1, dev:sdo
disk 1, o:1, dev:sdq
disk 2, o:1, dev:sdm
disk 3, o:1, dev:sde
disk 4, o:1, dev:sdg
disk 5, o:1, dev:sdc
disk 6, o:1, dev:sdb
disk 7, o:1, dev:sdh
disk 8, o:1, dev:sdf
disk 9, o:1, dev:sdd
disk 10, o:1, dev:sdl
disk 11, o:1, dev:sdk
disk 12, o:1, dev:sdp
disk 13, o:1, dev:sdi
md0: invalid bitmap file superblock: bad magic
md0: bitmap file superblock:
         magic: fe7dba0d
       version: -365720891
          uuid: 9eab97d7.9147f31f.bdd1efdc.51fc0233
        events: 44011
events cleared: 44011
         state: d11bd8fb
     chunksize: -1058286870 B
  daemon sleep: 5s
     sync size: 78776200384467208 KB
max write behind: 0
md0: failed to create bitmap (-22)


The bitmaps on all of the disks are corrupted:
# mdadm -X /dev/sdm
        Filename : /dev/sdm
           Magic : fe7dba0d
mdadm: invalid bitmap magic 0xfe7dba0d, the bitmap file appears to be corrupted
         Version : -365720891
mdadm: unknown bitmap version -365720891, either the bitmap file is
corrupted or you need to upgrade your tools

# mdadm -X /dev/sdp
        Filename : /dev/sdp
           Magic : fe7dba0d
mdadm: invalid bitmap magic 0xfe7dba0d, the bitmap file appears to be corrupted
         Version : -365720891
mdadm: unknown bitmap version -365720891, either the bitmap file is
corrupted or you need to upgrade your tools
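[Editor's note, not part of the original report: the garbage values in the dump fall straight out of decoding random bytes with the bitmap superblock layout. A sketch of that decoding in Python; the field order and offsets are assumptions based on the kernel's bitmap_super_s and may differ across kernel versions:]

```python
import struct

# Header layout assumed from the kernel's bitmap_super_s; treat the
# offsets as an assumption, not a guarantee.
_FMT = "<Ii16sQQQIIII"   # magic, version, uuid, events, events_cleared,
                         # sync_size, state, chunksize, daemon_sleep,
                         # write_behind

def dump_bitmap_sb(raw: bytes) -> dict:
    """Decode the fixed part of an md bitmap superblock into the same
    fields the kernel prints on a 'bad magic' failure."""
    (magic, version, uuid, events, events_cleared, sync_size,
     state, chunksize, daemon_sleep, write_behind) = \
        struct.unpack_from(_FMT, raw)
    return {
        "magic": f"{magic:08x}",
        "version": version,               # decoded signed, which is why
                                          # the report shows -365720891
        "uuid": uuid.hex(),
        "events": events,
        "events cleared": events_cleared,
        "sync size (KB)": sync_size // 2,  # stored in 512-byte sectors
        "state": f"{state:08x}",
        "chunksize (B)": chunksize,
        "daemon sleep (s)": daemon_sleep,
        "max write behind": write_behind,
    }

# Arbitrary bytes decode without error, just into nonsense values --
# which is exactly what the report above shows.
print(dump_bitmap_sb(bytes(range(64)))["magic"])   # -> 03020100
```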


I have encountered this problem twice, but it is difficult to
reproduce. Can anyone help me resolve it? Thank you very much.

# uname -r
2.6.32-71.el6.x86_64
# mdadm --version
mdadm - v3.2.2 - 17th June 2011



Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
2012-03-08 16:11 invalid bitmap file superblock: bad magic Arkadiusz Miśkiewicz
2012-03-08 16:41 ` Kai Stian Olstad
2012-03-08 16:55   ` Arkadiusz Miśkiewicz
2012-03-08 18:12 ` Arkadiusz Miśkiewicz
2012-03-08 20:00 ` NeilBrown
2012-03-08 20:24   ` Arkadiusz Miśkiewicz
2012-03-08 21:04     ` NeilBrown
  -- strict thread matches above, loose matches on Subject: below --
2014-05-23  3:40 Invalid " Liu H.T
