* Very small internal bitmap after recreate
@ 2007-11-02 9:01 Ralf Müller
[not found] ` <18218.60496.499724.710572@notabene.brown>
0 siblings, 1 reply; 6+ messages in thread
From: Ralf Müller @ 2007-11-02 9:01 UTC (permalink / raw)
To: linux-raid
I have a 5-disk version 1.0 superblock RAID5 whose internal bitmap was
reported to have a size of 299 pages in /proc/mdstat. For whatever
reason I removed this bitmap (mdadm --grow --bitmap=none) and recreated
it afterwards (mdadm --grow --bitmap=internal). Now it has a reported
size of 10 pages.

Do I have a problem?
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid5 sdd1[0] sdh1[5] sdf1[4] sdg1[3] sde1[1]
      1250273792 blocks super 1.0 level 5, 128k chunk, algorithm 2 [5/5] [UUUUU]
bitmap: 0/10 pages [0KB], 16384KB chunk
# mdadm -X /dev/sdg1
Filename : /dev/sdg1
Magic : 6d746962
Version : 4
UUID : e1a335a8:fc0f0626:d70687a6:5d9a9c19
Events : 408088
Events Cleared : 408088
State : OK
Chunksize : 16 MB
Daemon : 5s flush period
Write Mode : Normal
Sync Size : 312568448 (298.09 GiB 320.07 GB)
Bitmap : 19078 bits (chunks), 0 dirty (0.0%)
# mdadm --version
mdadm - v2.6.2 - 21st May 2007
# uname -a
Linux DatenGrab 2.6.22.9-0.4-default #1 SMP 2007/10/05 21:32:04 UTC
i686 i686 i386 GNU/Linux
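As a cross-check on the page counts above: md keeps a 16-bit in-memory counter per bitmap chunk (COUNTER_BITS in drivers/md/bitmap.c), so the "pages" figure in /proc/mdstat follows from the chunk size. A minimal sketch, assuming 4 KiB pages; the 512 KiB chunk attributed to the old 299-page bitmap is an assumption, not something stated in the thread:

```python
import math

def bitmap_pages(sync_size_kib, chunk_kib, counter_bits=16, page_size=4096):
    """In-memory pages md needs for a bitmap: one bit per chunk on disk,
    but a 16-bit counter per chunk in memory (assumed, per bitmap.c)."""
    bits = math.ceil(sync_size_kib / chunk_kib)
    counter_bytes = bits * counter_bits // 8      # 2 bytes per chunk
    return bits, math.ceil(counter_bytes / page_size)

sync_size_kib = 312568448                 # "Sync Size" from mdadm -X, in KiB
bits, pages = bitmap_pages(sync_size_kib, 16384)   # 16384KB chunk, as reported
# bits -> 19078 and pages -> 10, matching "19078 bits" and "0/10 pages";
# a hypothetical 512 KiB chunk would give the 299 pages seen originally
```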
Regards Ralf
* Re: Very small internal bitmap after recreate
@ 2007-11-02 10:22 Ralf Müller
0 siblings, 2 replies; 6+ messages in thread
From: Ralf Müller @ 2007-11-02 10:22 UTC (permalink / raw)
To: linux-raid

On 02.11.2007 at 10:22, Neil Brown wrote:

> On Friday November 2, ralf@bj-ig.de wrote:
>> I have a 5-disk version 1.0 superblock RAID5 whose internal bitmap
>> was reported to have a size of 299 pages in /proc/mdstat. For
>> whatever reason I removed this bitmap (mdadm --grow --bitmap=none)
>> and recreated it afterwards (mdadm --grow --bitmap=internal). Now it
>> has a reported size of 10 pages.
>>
>> Do I have a problem?
>
> Not a big problem, but possibly a small problem.
> Can you send
>   mdadm -E /dev/sdg1
> as well?

Sure:

# mdadm -E /dev/sdg1
/dev/sdg1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : e1a335a8:fc0f0626:d70687a6:5d9a9c19
           Name : 1
  Creation Time : Wed Oct 31 14:30:55 2007
     Raid Level : raid5
   Raid Devices : 5

  Used Dev Size : 625137008 (298.09 GiB 320.07 GB)
     Array Size : 2500547584 (1192.35 GiB 1280.28 GB)
      Used Size : 625136896 (298.09 GiB 320.07 GB)
   Super Offset : 625137264 sectors
          State : clean
    Device UUID : 95afade2:f2ab8e83:b0c764a0:4732827d

Internal Bitmap : 2 sectors from superblock
    Update Time : Fri Nov  2 07:46:38 2007
       Checksum : 4ee307b3 - correct
         Events : 408088

         Layout : left-symmetric
     Chunk Size : 128K

     Array Slot : 3 (0, 1, failed, 2, 3, 4)
    Array State : uuUuu 1 failed

This time I'm getting nervous - "Array State ... failed" doesn't sound good!

Regards
Ralf
* Re: Very small internal bitmap after recreate
@ 2007-11-02 10:29 Ralf Müller
1 sibling, 0 replies; 6+ messages in thread
From: Ralf Müller @ 2007-11-02 10:29 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid

On 02.11.2007 at 11:22, Ralf Müller wrote:

> # mdadm -E /dev/sdg1
> [...]
>      Array Slot : 3 (0, 1, failed, 2, 3, 4)
>     Array State : uuUuu 1 failed
>
> This time I'm getting nervous - "Array State ... failed" doesn't
> sound good!

Just to make it clear - the array is still reported active in
/proc/mdstat and behaves well - no failed devices:

md1 : active raid5 sdd1[0] sdh1[5] sdf1[4] sdg1[3] sde1[1]
      1250273792 blocks super 1.0 level 5, 128k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 0/10 pages [0KB], 16384KB chunk

Regards
Ralf
* Re: Very small internal bitmap after recreate
@ 2007-11-02 11:43 Neil Brown
1 sibling, 2 replies; 6+ messages in thread
From: Neil Brown @ 2007-11-02 11:43 UTC (permalink / raw)
To: Ralf Müller; +Cc: linux-raid

On Friday November 2, ralf@bj-ig.de wrote:
> [...]
>  Used Dev Size : 625137008 (298.09 GiB 320.07 GB)
>     Array Size : 2500547584 (1192.35 GiB 1280.28 GB)
>      Used Size : 625136896 (298.09 GiB 320.07 GB)
>   Super Offset : 625137264 sectors

So there are 256 sectors before the superblock where a bitmap could
go, or about 6 sectors afterwards...

> Internal Bitmap : 2 sectors from superblock

And the '6 sectors afterwards' was chosen. 6 sectors has room for
5*512*8 = 20480 bits, and from your previous email:

> Bitmap : 19078 bits (chunks), 0 dirty (0.0%)

you have 19078 bits, which is about right (as the bitmap chunk size
must be a power of 2).

So the problem is that "mdadm -G" is putting the bitmap after the
superblock rather than considering the space before... (checks code)
Ahh, I remember now. There is currently no interface to tell the
kernel where to put the bitmap when creating one on an active array,
so it always puts it in the 'safe' place. Another enhancement waiting
for time.

For now, you will have to live with a smallish bitmap, which probably
isn't a real problem. With 19078 bits, you will still get a
several-thousand-fold increase in resync speed after a crash
(i.e. hours become seconds), and to some extent fewer bits are better,
as you have to update them less.

I haven't made any measurements to see what size bitmap is ideal...
maybe someone should :-)

>     Array Slot : 3 (0, 1, failed, 2, 3, 4)
>    Array State : uuUuu 1 failed
>
> This time I'm getting nervous - "Array State ... failed" doesn't
> sound good!

This is nothing to worry about - just a bad message from mdadm.

The superblock has recorded that there was once a device in position 2
which is now failed (see the list in "Array Slot"). This is summarised
as "1 failed" in "Array State". But the array is definitely working OK
now.

NeilBrown
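Neil's arithmetic can be reproduced: pick the smallest power-of-two chunk whose one-bit-per-chunk bitmap fits in the available space. A rough sketch of that selection (the function name and the 4 KiB starting chunk are my own choices, not mdadm's actual code):

```python
import math

def pick_bitmap_chunk(sync_size_kib, max_bits, min_chunk_kib=4):
    """Smallest power-of-two chunk (KiB) whose bit count fits max_bits."""
    chunk = min_chunk_kib
    while math.ceil(sync_size_kib / chunk) > max_bits:
        chunk *= 2
    return chunk

sync_size_kib = 312568448        # "Sync Size" from the mdadm -X output, in KiB
max_bits = 5 * 512 * 8           # 20480 bits, the room Neil computes above
chunk = pick_bitmap_chunk(sync_size_kib, max_bits)
bits = math.ceil(sync_size_kib / chunk)
# chunk -> 16384 KiB (16 MB) and bits -> 19078, matching the reported bitmap
```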
* Re: Very small internal bitmap after recreate
@ 2007-11-02 12:46 Ralf Müller
1 sibling, 0 replies; 6+ messages in thread
From: Ralf Müller @ 2007-11-02 12:46 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid

On 02.11.2007 at 12:43, Neil Brown wrote:

> For now, you will have to live with a smallish bitmap, which probably
> isn't a real problem.

Ok then.

>> Array Slot : 3 (0, 1, failed, 2, 3, 4)
>> Array State : uuUuu 1 failed
>>
>> This time I'm getting nervous - "Array State ... failed" doesn't
>> sound good!
>
> This is nothing to worry about - just a bad message from mdadm.
>
> The superblock has recorded that there was once a device in position 2
> which is now failed (see the list in "Array Slot").
> This is summarised as "1 failed" in "Array State".
>
> But the array is definitely working OK now.

Good to know.

Thanks a lot
Ralf
* internal bitmap size
@ 2008-02-26 17:35 Hubert Verstraete
1 sibling, 0 replies; 6+ messages in thread
From: Hubert Verstraete @ 2008-02-26 17:35 UTC (permalink / raw)
To: Neil Brown, linux-raid

Hi Neil,

Neil Brown wrote:
> For now, you will have to live with a smallish bitmap, which probably
> isn't a real problem. With 19078 bits, you will still get a
> several-thousand-fold increase in resync speed after a crash
> (i.e. hours become seconds), and to some extent fewer bits are
> better, as you have to update them less.
>
> I haven't made any measurements to see what size bitmap is ideal...
> maybe someone should :-)

I've run some tests with a RAID-5 array of four 250GB disks, and write
speed is really poor with the default internal bitmap size. Setting a
larger bitmap chunk size (16 MB, for example) creates a small bitmap.
The write speed is then almost the same as with no bitmap at all,
which is great. And as you said, a resync is then a matter of seconds
(or minutes) instead of hours (without a bitmap). With such a setting
I get both good write speed and good resync speed. That's how I would
go about finding MY ideal bitmap size.

Neil, is there any issue with using the --bitmap-chunk option together
with an internal bitmap? The man page does not clearly say to avoid
it, and the mdadm source code does not prevent setting this size for
an internal bitmap.

My measurements showed that with a v1.0 superblock the default bitmap
size makes writes 4x slower than a small bitmap does. With v1.1 and
v1.2 superblocks, the default is 2x slower. We are far from the 10%
slowdown claimed by http://linux-raid.osdl.org

I've also tried playing with the bitmap flush period (--delay option),
but I would need to understand how it works to see whether it can
affect performance.

Regards,
Hubert
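For anyone wanting to repeat Hubert's experiment, the sequence would look roughly like this. This is a sketch, not verified against mdadm 2.6.x: the device names are examples, --bitmap-chunk takes a size in KiB (so 16384 = 16 MB), and you should try it on a scratch array first.

```shell
# Drop the existing bitmap, then recreate it with an explicit chunk size.
mdadm --grow /dev/md1 --bitmap=none
mdadm --grow /dev/md1 --bitmap=internal --bitmap-chunk=16384

# Check the result: page count in mdstat, chunk size in the bitmap superblock.
grep -A 2 '^md1' /proc/mdstat
mdadm -X /dev/sdd1 | grep -i chunksize
```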
end of thread, other threads:[~2008-02-26 17:35 UTC | newest]
Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
2007-11-02 9:01 Very small internal bitmap after recreate Ralf Müller
[not found] ` <18218.60496.499724.710572@notabene.brown>
2007-11-02 10:22 ` Ralf Müller
2007-11-02 10:29 ` Ralf Müller
2007-11-02 11:43 ` Neil Brown
2007-11-02 12:46 ` Ralf Müller
2008-02-26 17:35 ` internal bitmap size Hubert Verstraete