* mdadm -X bitmap status off by 2^16
@ 2006-07-18  6:46 Janos Farkas
  2006-07-18 15:30 ` Paul Clements
  0 siblings, 1 reply; 3+ messages in thread
From: Janos Farkas @ 2006-07-18  6:46 UTC (permalink / raw)
  To: linux-raid

Hi!

Another pseudo-problem :)  I've just set up a three-disk RAID5 array by
creating it with only two disks and adding the third one later.  Everything
seems normal, but the mdadm (2.5.2) -X output looks odd:

        Filename : /dev/hda3
           Magic : 6d746962
         Version : 4
            UUID : 293ceee6.d1811fb1.a8b316e6.b54abcc7
          Events : 12
  Events Cleared : 12
           State : OK
       Chunksize : 1 MB
          Daemon : 5s flush period
      Write Mode : Normal
       Sync Size : 292784512 (279.22 GiB 299.81 GB)
          Bitmap : 285923 bits (chunks), 65536 dirty (22.9%)

# for i in hdb3 hdd3 hda3 ; mdadm -X /dev/$i|grep map
          Bitmap : 285923 bits (chunks), 0 dirty (0.0%)
          Bitmap : 285923 bits (chunks), 0 dirty (0.0%)
          Bitmap : 285923 bits (chunks), 65536 dirty (22.9%)
# for i in hdb3 hdd3 hda3 ; mdadm -X /dev/$i|grep map
          Bitmap : 285923 bits (chunks), 1 dirty (0.0%)
          Bitmap : 285923 bits (chunks), 1 dirty (0.0%)
          Bitmap : 285923 bits (chunks), 65537 dirty (22.9%)
# for i in hdb3 hdd3 hda3 ; mdadm -X /dev/$i|grep map
          Bitmap : 285923 bits (chunks), 7 dirty (0.0%)
          Bitmap : 285923 bits (chunks), 7 dirty (0.0%)
          Bitmap : 285923 bits (chunks), 65543 dirty (22.9%)
# for i in hdb3 hdd3 hda3 ; mdadm -X /dev/$i|grep map
          Bitmap : 285923 bits (chunks), 0 dirty (0.0%)
          Bitmap : 285923 bits (chunks), 0 dirty (0.0%)
          Bitmap : 285923 bits (chunks), 65536 dirty (22.9%)

# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid5 hdd3[2] hdb3[0] hda3[1]
      585569024 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/140 pages [0KB], 1024KB chunk

Is this going to bite me later on, or is it just a harmless display problem?

Janos


* Re: mdadm -X bitmap status off by 2^16
  2006-07-18  6:46 mdadm -X bitmap status off by 2^16 Janos Farkas
@ 2006-07-18 15:30 ` Paul Clements
  2006-07-18 16:54   ` Janos Farkas
  0 siblings, 1 reply; 3+ messages in thread
From: Paul Clements @ 2006-07-18 15:30 UTC (permalink / raw)
  To: Janos Farkas, linux-raid

Janos Farkas wrote:

> # for i in hdb3 hdd3 hda3 ; mdadm -X /dev/$i|grep map
>           Bitmap : 285923 bits (chunks), 0 dirty (0.0%)
>           Bitmap : 285923 bits (chunks), 0 dirty (0.0%)
>           Bitmap : 285923 bits (chunks), 65536 dirty (22.9%)

This indicates that the _on-disk_ bits are cleared on two disks, but set 
on the third.


> # cat /proc/mdstat
> Personalities : [raid1] [raid6] [raid5] [raid4]
> md0 : active raid5 hdd3[2] hdb3[0] hda3[1]
>       585569024 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
>       bitmap: 0/140 pages [0KB], 1024KB chunk

This indicates that the _in-memory_ bits are all cleared.

At array startup, md initializes the in-memory bitmap from the on-disk 
copy. It then uses the in-memory bitmap from that point on, shadowing 
any changes there into the on-disk bitmap.
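
For example (same device names as above, and really just another way of
looking at what you already posted), the two views can be compared directly:
/proc/mdstat reflects the in-memory bitmap, while mdadm -X reads the on-disk
copy from each member:

# grep bitmap /proc/mdstat                 # in-memory (kernel's working copy)
# for i in hdb3 hdd3 hda3; do mdadm -X /dev/$i | grep -i dirty; done
                                           # on-disk copy on each member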

At the end of a rebuild (which should have happened after you added the 
third disk), the bits should all be cleared. The on-disk bits get 
cleared lazily, though. Is there any chance that they are cleared now? 
If not, it sounds like a bug to me.
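
The bitmap superblock above says "Daemon : 5s flush period", so the lazy
write-back to disk should normally happen within seconds of the bits being
cleared in memory; re-running the same check after a short wait ought to show
whether it ever happens, e.g.:

# sleep 10; for i in hdb3 hdd3 hda3; do mdadm -X /dev/$i | grep -i dirty; done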

--
Paul


* Re: mdadm -X bitmap status off by 2^16
  2006-07-18 15:30 ` Paul Clements
@ 2006-07-18 16:54   ` Janos Farkas
  0 siblings, 0 replies; 3+ messages in thread
From: Janos Farkas @ 2006-07-18 16:54 UTC (permalink / raw)
  To: Paul Clements; +Cc: linux-raid

Hi!

On 2006-07-18 at 11:30:42, Paul Clements wrote:
> >Personalities : [raid1] [raid6] [raid5] [raid4]
> >md0 : active raid5 hdd3[2] hdb3[0] hda3[1]
> >      585569024 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
> >      bitmap: 0/140 pages [0KB], 1024KB chunk
> This indicates that the _in-memory_ bits are all cleared.

Makes sense.

> At array startup, md initializes the in-memory bitmap from the on-disk 
> copy. It then uses the in-memory bitmap from that point on, shadowing 
> any changes there into the on-disk bitmap.
> 
> At the end of a rebuild (which should have happened after you added the 
> third disk), the bits should all be cleared. The on-disk bits get 
> cleared lazily, though. Is there any chance that they are cleared now? 
> If not, it sounds like a bug to me.

I just removed and re-added the bitmap as follows, but before that, the
65536 extra dirty bits were still there as of 5 minutes ago.

# mdadm /dev/md0 --grow -b none
# for i in hdb3 hdd3 hda3 ; mdadm -X /dev/$i|grep map
          Bitmap : 285923 bits (chunks), 3 dirty (0.0%)
          Bitmap : 285923 bits (chunks), 3 dirty (0.0%)
          Bitmap : 285923 bits (chunks), 65539 dirty (22.9%)
# for i in hdb3 hdd3 hda3 ; mdadm -X /dev/$i|grep map
          Bitmap : 285923 bits (chunks), 3 dirty (0.0%)
          Bitmap : 285923 bits (chunks), 3 dirty (0.0%)
          Bitmap : 285923 bits (chunks), 65539 dirty (22.9%)

(The bitmaps are still present on disk; probably I was just too impatient
after the removal.)

# mdadm /dev/md0 --grow -b internal
# for i in hdb3 hdd3 hda3 ; mdadm -X /dev/$i|grep map
          Bitmap : 285923 bits (chunks), 285923 dirty (100.0%)
          Bitmap : 285923 bits (chunks), 285923 dirty (100.0%)
          Bitmap : 285923 bits (chunks), 285923 dirty (100.0%)
# for i in hdb3 hdd3 hda3 ; mdadm -X /dev/$i|grep map
          Bitmap : 285923 bits (chunks), 285923 dirty (100.0%)
          Bitmap : 285923 bits (chunks), 285923 dirty (100.0%)
          Bitmap : 285923 bits (chunks), 285923 dirty (100.0%)
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid5 hdd3[2] hdb3[0] hda3[1]
      585569024 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 140/140 pages [560KB], 1024KB chunk

unused devices: <none>

(Ouch, I hoped there wouldn't be another resync :)

# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid5 hdd3[2] hdb3[0] hda3[1]
      585569024 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 1/140 pages [4KB], 1024KB chunk

unused devices: <none>

(Now the in-memory bitmap seems to be empty again)

# for i in hdb3 hdd3 hda3 ; mdadm -X /dev/$i|grep map
          Bitmap : 285923 bits (chunks), 285923 dirty (100.0%)
          Bitmap : 285923 bits (chunks), 285923 dirty (100.0%)
          Bitmap : 285923 bits (chunks), 285923 dirty (100.0%)
# for i in hdb3 hdd3 hda3 ; mdadm -X /dev/$i|grep map
          Bitmap : 285923 bits (chunks), 0 dirty (0.0%)
          Bitmap : 285923 bits (chunks), 0 dirty (0.0%)
          Bitmap : 285923 bits (chunks), 0 dirty (0.0%)

And fortunately the on-disk ones are too...

The discrepancy survived at least two reboots after the whole resync had
finished.  I also did a "scrub" (check) on the array, and it still did not
change.
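
For the record, the whole sequence that got rid of the stale on-disk bits
for me was just:

# mdadm /dev/md0 --grow -b none
  (wait a little for the member superblocks to be rewritten)
# mdadm /dev/md0 --grow -b internal
  (all bits go dirty for a short while, then get cleared again)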

