public inbox for linux-xfs@vger.kernel.org
 help / color / mirror / Atom feed
* XFS mount failure on RAID5
@ 2009-10-16  3:09 hank peng
  2009-10-16  8:19 ` Michael Monnerie
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: hank peng @ 2009-10-16  3:09 UTC (permalink / raw)
  To: xfs; +Cc: linux-raid

Hi, all:
I have a self-built board with an MPC8548 CPU (PPC arch); the kernel is
based on the MPC8548CDS demo board and its version is 2.6.23.
A SATA controller is connected to the CPU via PCI-X, and I have 3 disks
attached to it.

root@Storage:~# mdadm -C /dev/md0 -l5 -n3 /dev/sd{b,c,d}
root@Storage:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5]
[raid4] [multipath]
md0 : active raid5 sdd[3] sdc[1] sdb[0]
      490234624 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [>....................]  recovery =  0.8% (1990404/245117312)
finish=83.3min speed=48603K/sec

unused devices: <none>
root@Storage:~# pvcreate /dev/md0
  Physical volume "/dev/md0" successfully created
root@Storage:~# vgcreate vg /dev/md0
  Volume group "vg" successfully created
root@Storage:~# lvcreate -L 100G -n lvtest vg
  Logical volume "lvtest" created
root@Storage:~# mkfs.xfs -f -ssize=4k /dev/vg/lvtest
Warning - device mapper device, but no dmsetup(8) found
Warning - device mapper device, but no dmsetup(8) found
meta-data=/dev/vg/lvtest         isize=256    agcount=4, agsize=6553600 blks
         =                       sectsz=4096  attr=2
data     =                       bsize=4096   blocks=26214400, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=12800, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
root@Storage:~# mkdir tmp
root@Storage:~# mount -t xfs /dev/vg/lvtest ./tmp/
Filesystem "dm-0": Disabling barriers, not supported by the underlying device
XFS mounting filesystem dm-0
XFS: totally zeroed log
Filesystem "dm-0": XFS internal error xlog_clear_stale_blocks(2) at
line 1252 of file fs/xfs/xfs_log_recover.c.  Caller 0xc018ec88
Call Trace:
[e8ab9a60] [c00091ec] show_stack+0x3c/0x1a0 (unreliable)
[e8ab9a90] [c017559c] xfs_error_report+0x50/0x60
[e8ab9aa0] [c018e84c] xlog_clear_stale_blocks+0xe4/0x1c8
[e8ab9ad0] [c018ec88] xlog_find_tail+0x358/0x494
[e8ab9b20] [c0190ba0] xlog_recover+0x20/0xf4
[e8ab9b40] [c018993c] xfs_log_mount+0x104/0x148
[e8ab9b60] [c01930f0] xfs_mountfs+0x8d4/0xd14
[e8ab9c00] [c0183f88] xfs_ioinit+0x38/0x4c
[e8ab9c20] [c019bf24] xfs_mount+0x458/0x470
[e8ab9c60] [c01b087c] vfs_mount+0x38/0x48
[e8ab9c70] [c01b052c] xfs_fs_fill_super+0x98/0x1f8
[e8ab9cf0] [c0076cec] get_sb_bdev+0x164/0x1a8
[e8ab9d40] [c01af3bc] xfs_fs_get_sb+0x1c/0x2c
[e8ab9d50] [c00769f8] vfs_kern_mount+0x58/0xe0
[e8ab9d70] [c0076ad0] do_kern_mount+0x40/0xf8
[e8ab9d90] [c008ee0c] do_mount+0x158/0x600
[e8ab9f10] [c008f344] sys_mount+0x90/0xe8
[e8ab9f40] [c0002320] ret_from_syscall+0x0/0x3c
XFS: failed to locate log tail
XFS: log mount/recovery failed: error 117
XFS: log mount failed
mount: mounting /dev/vg/lvtest on ./tmp/ failed: Structure needs cleaning


Interestingly, if I remove the "-ssize=4k" option from the mkfs.xfs command, it works:
root@Storage:~# mkfs.xfs -f  /dev/vg/lvtest
Warning - device mapper device, but no dmsetup(8) found
Warning - device mapper device, but no dmsetup(8) found
meta-data=/dev/vg/lvtest         isize=256    agcount=4, agsize=6553600 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=26214400, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=12800, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
root@Storage:~# mount -t xfs /dev/vg/lvtest ./tmp/
Filesystem "dm-0": Disabling barriers, not supported by the underlying device
XFS mounting filesystem dm-0

I don't know what happens when the "-ssize=4k" option is added; what is the difference?


-- 
The simplest is not all best but the best is surely the simplest!

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: XFS mount failure on RAID5
  2009-10-16  3:09 XFS mount failure on RAID5 hank peng
@ 2009-10-16  8:19 ` Michael Monnerie
  2009-10-16 15:27   ` Eric Sandeen
  2009-10-16  8:30 ` Justin Piszcz
  2009-10-16 15:28 ` Eric Sandeen
  2 siblings, 1 reply; 10+ messages in thread
From: Michael Monnerie @ 2009-10-16  8:19 UTC (permalink / raw)
  To: xfs

On Freitag 16 Oktober 2009 hank peng wrote:
> I don't know what happens when the "-ssize=4k" option is added; what is the difference?

You can set the *sector* size to 4k only when your drives have a 4k sector
size. Normal disks so far have always had 512 bytes/sector.

mfg zmi
-- 
// Michael Monnerie, Ing.BSc    -----      http://it-management.at
// Tel: 0660 / 415 65 31                      .network.your.ideas.
// PGP Key:         "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38  500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net                  Key-ID: 1C1209B4



* Re: XFS mount failure on RAID5
  2009-10-16  3:09 XFS mount failure on RAID5 hank peng
  2009-10-16  8:19 ` Michael Monnerie
@ 2009-10-16  8:30 ` Justin Piszcz
  2009-10-16 15:28 ` Eric Sandeen
  2 siblings, 0 replies; 10+ messages in thread
From: Justin Piszcz @ 2009-10-16  8:30 UTC (permalink / raw)
  To: hank peng; +Cc: linux-raid, xfs

Hi,

2.6.23... Can you retry with 2.6.32-rcX or 2.6.31 to see if you can
reproduce the problem?

Justin.

On Fri, 16 Oct 2009, hank peng wrote:

> Hi, all:
> I have a self-built board with an MPC8548 CPU (PPC arch); the kernel is
> based on the MPC8548CDS demo board and its version is 2.6.23.
> A SATA controller is connected to the CPU via PCI-X, and I have 3 disks
> attached to it.
>
> root@Storage:~# mdadm -C /dev/md0 -l5 -n3 /dev/sd{b,c,d}
> root@Storage:~# cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5]
> [raid4] [multipath]
> md0 : active raid5 sdd[3] sdc[1] sdb[0]
>      490234624 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
>      [>....................]  recovery =  0.8% (1990404/245117312)
> finish=83.3min speed=48603K/sec
>
> unused devices: <none>
> root@Storage:~# pvcreate /dev/md0
>  Physical volume "/dev/md0" successfully created
> root@Storage:~# vgcreate vg /dev/md0
>  Volume group "vg" successfully created
> root@Storage:~# lvcreate -L 100G -n lvtest vg
>  Logical volume "lvtest" created
> root@Storage:~# mkfs.xfs -f -ssize=4k /dev/vg/lvtest
> Warning - device mapper device, but no dmsetup(8) found
> Warning - device mapper device, but no dmsetup(8) found
> meta-data=/dev/vg/lvtest         isize=256    agcount=4, agsize=6553600 blks
>         =                       sectsz=4096  attr=2
> data     =                       bsize=4096   blocks=26214400, imaxpct=25
>         =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096
> log      =internal log           bsize=4096   blocks=12800, version=2
>         =                       sectsz=4096  sunit=1 blks, lazy-count=0
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> root@Storage:~# mkdir tmp
> root@Storage:~# mount -t xfs /dev/vg/lvtest ./tmp/
> Filesystem "dm-0": Disabling barriers, not supported by the underlying device
> XFS mounting filesystem dm-0
> XFS: totally zeroed log
> Filesystem "dm-0": XFS internal error xlog_clear_stale_blocks(2) at
> line 1252 of file fs/xfs/xfs_log_recover.c.  Caller 0xc018ec88
> Call Trace:
> [e8ab9a60] [c00091ec] show_stack+0x3c/0x1a0 (unreliable)
> [e8ab9a90] [c017559c] xfs_error_report+0x50/0x60
> [e8ab9aa0] [c018e84c] xlog_clear_stale_blocks+0xe4/0x1c8
> [e8ab9ad0] [c018ec88] xlog_find_tail+0x358/0x494
> [e8ab9b20] [c0190ba0] xlog_recover+0x20/0xf4
> [e8ab9b40] [c018993c] xfs_log_mount+0x104/0x148
> [e8ab9b60] [c01930f0] xfs_mountfs+0x8d4/0xd14
> [e8ab9c00] [c0183f88] xfs_ioinit+0x38/0x4c
> [e8ab9c20] [c019bf24] xfs_mount+0x458/0x470
> [e8ab9c60] [c01b087c] vfs_mount+0x38/0x48
> [e8ab9c70] [c01b052c] xfs_fs_fill_super+0x98/0x1f8
> [e8ab9cf0] [c0076cec] get_sb_bdev+0x164/0x1a8
> [e8ab9d40] [c01af3bc] xfs_fs_get_sb+0x1c/0x2c
> [e8ab9d50] [c00769f8] vfs_kern_mount+0x58/0xe0
> [e8ab9d70] [c0076ad0] do_kern_mount+0x40/0xf8
> [e8ab9d90] [c008ee0c] do_mount+0x158/0x600
> [e8ab9f10] [c008f344] sys_mount+0x90/0xe8
> [e8ab9f40] [c0002320] ret_from_syscall+0x0/0x3c
> XFS: failed to locate log tail
> XFS: log mount/recovery failed: error 117
> XFS: log mount failed
> mount: mounting /dev/vg/lvtest on ./tmp/ failed: Structure needs cleaning
>
>
> Interestingly, if I remove the "-ssize=4k" option from the mkfs.xfs command, it works:
> root@Storage:~# mkfs.xfs -f  /dev/vg/lvtest
> Warning - device mapper device, but no dmsetup(8) found
> Warning - device mapper device, but no dmsetup(8) found
> meta-data=/dev/vg/lvtest         isize=256    agcount=4, agsize=6553600 blks
>         =                       sectsz=512   attr=2
> data     =                       bsize=4096   blocks=26214400, imaxpct=25
>         =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096
> log      =internal log           bsize=4096   blocks=12800, version=2
>         =                       sectsz=512   sunit=0 blks, lazy-count=0
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> root@Storage:~# mount -t xfs /dev/vg/lvtest ./tmp/
> Filesystem "dm-0": Disabling barriers, not supported by the underlying device
> XFS mounting filesystem dm-0
>
> I don't know what happens when the "-ssize=4k" option is added; what is the difference?
>
>
> -- 
> The simplest is not all best but the best is surely the simplest!
>
>



* Re: XFS mount failure on RAID5
  2009-10-16  8:19 ` Michael Monnerie
@ 2009-10-16 15:27   ` Eric Sandeen
  2009-10-17 21:27     ` Michael Monnerie
  0 siblings, 1 reply; 10+ messages in thread
From: Eric Sandeen @ 2009-10-16 15:27 UTC (permalink / raw)
  To: Michael Monnerie; +Cc: xfs

Michael Monnerie wrote:
> On Freitag 16 Oktober 2009 hank peng wrote:
>> I don't know what happens when the "-ssize=4k" option is added; what is the difference?
> 
> You can set the *sector* size to 4k only when your drives have 4k sector 
> size. Normal disks so far always have 512 Bytes/sector.
> 
> mfg zmi

Actually -ssize=4k is just fine even on 512 sector disks.

-Eric



* Re: XFS mount failure on RAID5
  2009-10-16  3:09 XFS mount failure on RAID5 hank peng
  2009-10-16  8:19 ` Michael Monnerie
  2009-10-16  8:30 ` Justin Piszcz
@ 2009-10-16 15:28 ` Eric Sandeen
  2009-10-16 15:55   ` Eric Sandeen
  2009-10-19  0:54   ` hank peng
  2 siblings, 2 replies; 10+ messages in thread
From: Eric Sandeen @ 2009-10-16 15:28 UTC (permalink / raw)
  To: hank peng; +Cc: linux-raid, xfs

hank peng wrote:
> Hi, all:
> I have a self-built board with an MPC8548 CPU (PPC arch); the kernel is
> based on the MPC8548CDS demo board and its version is 2.6.23.
> A SATA controller is connected to the CPU via PCI-X, and I have 3 disks
> attached to it.
> 
...

> root@Storage:~# mkfs.xfs -f -ssize=4k /dev/vg/lvtest
> Warning - device mapper device, but no dmsetup(8) found
> Warning - device mapper device, but no dmsetup(8) found
> meta-data=/dev/vg/lvtest         isize=256    agcount=4, agsize=6553600 blks
>          =                       sectsz=4096  attr=2
> data     =                       bsize=4096   blocks=26214400, imaxpct=25
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096
> log      =internal log           bsize=4096   blocks=12800, version=2
>          =                       sectsz=4096  sunit=1 blks, lazy-count=0
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> root@Storage:~# mkdir tmp
> root@Storage:~# mount -t xfs /dev/vg/lvtest ./tmp/
> Filesystem "dm-0": Disabling barriers, not supported by the underlying device
> XFS mounting filesystem dm-0
> XFS: totally zeroed log
> Filesystem "dm-0": XFS internal error xlog_clear_stale_blocks(2) at
> line 1252 of file fs/xfs/xfs_log_recover.c.  Caller 0xc018ec88


Can you try the patch that Andy Poling posted to the list just
yesterday?  A slight long shot, but it may be it.

Otherwise I will look at this in a bit; on the road today though.

-Eric



* Re: XFS mount failure on RAID5
  2009-10-16 15:28 ` Eric Sandeen
@ 2009-10-16 15:55   ` Eric Sandeen
  2009-10-19  0:54   ` hank peng
  1 sibling, 0 replies; 10+ messages in thread
From: Eric Sandeen @ 2009-10-16 15:55 UTC (permalink / raw)
  To: hank peng; +Cc: linux-raid, xfs

Eric Sandeen wrote:
> hank peng wrote:
>> Hi, all:
>> I have a self-built board with an MPC8548 CPU (PPC arch); the kernel is
>> based on the MPC8548CDS demo board and its version is 2.6.23.
>> A SATA controller is connected to the CPU via PCI-X, and I have 3 disks
>> attached to it.
>>
> ...
> 
>> root@Storage:~# mkfs.xfs -f -ssize=4k /dev/vg/lvtest
>> Warning - device mapper device, but no dmsetup(8) found
>> Warning - device mapper device, but no dmsetup(8) found
>> meta-data=/dev/vg/lvtest         isize=256    agcount=4, 
>> agsize=6553600 blks
>>          =                       sectsz=4096  attr=2
>> data     =                       bsize=4096   blocks=26214400, imaxpct=25
>>          =                       sunit=0      swidth=0 blks
>> naming   =version 2              bsize=4096
>> log      =internal log           bsize=4096   blocks=12800, version=2
>>          =                       sectsz=4096  sunit=1 blks, lazy-count=0
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>> root@Storage:~# mkdir tmp
>> root@Storage:~# mount -t xfs /dev/vg/lvtest ./tmp/
>> Filesystem "dm-0": Disabling barriers, not supported by the underlying 
>> device
>> XFS mounting filesystem dm-0
>> XFS: totally zeroed log
>> Filesystem "dm-0": XFS internal error xlog_clear_stale_blocks(2) at
>> line 1252 of file fs/xfs/xfs_log_recover.c.  Caller 0xc018ec88
> 
> 
> Can you try the patch that Andy Poling posted to the list just 
> yesterday?  Slight longshot but it may be it.
> 
> Otherwise I will look at this in a bit; on the road today though.
> 
> -Eric

Actually you might try a newer xfsprogs and/or kernel; if I do this on a 
loopback file, creating the same geometry as you have:

mkfs.xfs -dfile,name=fsfile,size=2621440b -lsize=12800b,lazy-count=0 -ssize=4096

it mounts fine w/ latest xfsprogs and a 2.6.30 kernel.
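Spelled out as a fuller reproduction recipe (a sketch only; the file and
mount-point names are illustrative, and it assumes xfsprogs, a kernel with
XFS and loop-device support, and root privileges):

```shell
# Back the filesystem with a regular file; mkfs.xfs's "-d file" mode
# creates the image itself, so no spare block device is needed.
mkfs.xfs -d file,name=fsfile,size=2621440b \
         -l size=12800b,lazy-count=0 -s size=4096

# Loop-mount the image and check that the log mounts cleanly
# (this is the step that failed with error 117 above).
mkdir -p mnt
mount -o loop -t xfs fsfile mnt
umount mnt
```

If this mounts cleanly but the same geometry still fails on the original
box, that points at the storage stack underneath rather than at mkfs or
XFS itself.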

-Eric




* Re: XFS mount failure on RAID5
  2009-10-16 15:27   ` Eric Sandeen
@ 2009-10-17 21:27     ` Michael Monnerie
  2009-10-19  1:30       ` Dave Chinner
  0 siblings, 1 reply; 10+ messages in thread
From: Michael Monnerie @ 2009-10-17 21:27 UTC (permalink / raw)
  To: xfs

On Freitag 16 Oktober 2009 Eric Sandeen wrote:
> Actually -ssize=4k is just fine even on 512 sector disks.

Oh funny. So what's the meaning of this argument then? Or why would one
set it to 4k? What's the difference from 512b?

mfg zmi
-- 
// Michael Monnerie, Ing.BSc    -----      http://it-management.at
// Tel: 0660 / 415 65 31                      .network.your.ideas.
// PGP Key:         "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38  500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net                  Key-ID: 1C1209B4



* Re: XFS mount failure on RAID5
  2009-10-16 15:28 ` Eric Sandeen
  2009-10-16 15:55   ` Eric Sandeen
@ 2009-10-19  0:54   ` hank peng
  1 sibling, 0 replies; 10+ messages in thread
From: hank peng @ 2009-10-19  0:54 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: linux-raid, xfs

2009/10/16 Eric Sandeen <sandeen@sandeen.net>:
> hank peng wrote:
>>
>> Hi, all:
>> I have a self-built board with an MPC8548 CPU (PPC arch); the kernel is
>> based on the MPC8548CDS demo board and its version is 2.6.23.
>> A SATA controller is connected to the CPU via PCI-X, and I have 3 disks
>> attached to it.
>>
> ...
>
>> root@Storage:~# mkfs.xfs -f -ssize=4k /dev/vg/lvtest
>> Warning - device mapper device, but no dmsetup(8) found
>> Warning - device mapper device, but no dmsetup(8) found
>> meta-data=/dev/vg/lvtest         isize=256    agcount=4, agsize=6553600
>> blks
>>         =                       sectsz=4096  attr=2
>> data     =                       bsize=4096   blocks=26214400, imaxpct=25
>>         =                       sunit=0      swidth=0 blks
>> naming   =version 2              bsize=4096
>> log      =internal log           bsize=4096   blocks=12800, version=2
>>         =                       sectsz=4096  sunit=1 blks, lazy-count=0
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>> root@Storage:~# mkdir tmp
>> root@Storage:~# mount -t xfs /dev/vg/lvtest ./tmp/
>> Filesystem "dm-0": Disabling barriers, not supported by the underlying
>> device
>> XFS mounting filesystem dm-0
>> XFS: totally zeroed log
>> Filesystem "dm-0": XFS internal error xlog_clear_stale_blocks(2) at
>> line 1252 of file fs/xfs/xfs_log_recover.c.  Caller 0xc018ec88
>
>
> Can you try the patch that Andy Poling posted to the list just yesterday?
>  Slight longshot but it may be it.
>
> Otherwise I will look at this in a bit; on the road today though.
>
The reason has been found: there is a problem in our hardware XOR driver
firmware. I am sure now that it is not related to XFS.
Thanks to all you guys; we will try to fix it.

> -Eric
>



-- 
The simplest is not all best but the best is surely the simplest!



* Re: XFS mount failure on RAID5
  2009-10-17 21:27     ` Michael Monnerie
@ 2009-10-19  1:30       ` Dave Chinner
  2009-10-19  3:55         ` Christoph Hellwig
  0 siblings, 1 reply; 10+ messages in thread
From: Dave Chinner @ 2009-10-19  1:30 UTC (permalink / raw)
  To: Michael Monnerie; +Cc: xfs

On Sat, Oct 17, 2009 at 11:27:04PM +0200, Michael Monnerie wrote:
> On Freitag 16 Oktober 2009 Eric Sandeen wrote:
> > Actually -ssize=4k is just fine even on 512 sector disks.
> 
> Oh funny. So what's the meaning of this argument then? Or why would one 
> set it to 4k? What's the diff with 512b?

A hardware sector is the atomic unit of IO. 4k sectors on 512b
hardware sectors mean that a single 4k filesystem sector write is
not necessarily atomic. This can lead to problems with torn writes
at power loss, or to sub-filesystem-sector data loss/corruption when
a hardware sector goes bad. In general, though, these are detected no
differently from the same sector loss on a filesystem with 512b
sectors.

IIRC, the main reason for 4k sectors on MD RAID5/6 is that changing
the IO alignment from 4k to 512 byte IOs (i.e. sub-page sized)
causes MD to flush and invalidate the stripe cache. Hence every
time XFS writes a super block, AGF, AGFL or AGI, things go much
slower because of this flush/invalidate. By setting the sector size
to 4k, the SB/AGF/AGFL/AGI are all 4k in size and hence IO alignment
never changes and hence performance remains good.
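The difference is visible at mkfs time (a sketch; /dev/md0 stands in for
the array from this thread, it assumes xfsprogs is installed, and xfs_db
is used because it can read the superblock straight off an unmounted
device):

```shell
# Default: sectsz=512, so superblock/AGF/AGFL/AGI writes are 512-byte
# IOs, dropping below the 4k alignment the MD stripe cache prefers.
mkfs.xfs -f /dev/md0

# Forcing 4k sectors makes those metadata headers 4k in size, so the
# IO alignment never falls below page size.
mkfs.xfs -f -s size=4096 /dev/md0

# Read back the sector size actually recorded in the superblock.
xfs_db -c 'sb 0' -c 'p sectsize' /dev/md0
```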

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com



* Re: XFS mount failure on RAID5
  2009-10-19  1:30       ` Dave Chinner
@ 2009-10-19  3:55         ` Christoph Hellwig
  0 siblings, 0 replies; 10+ messages in thread
From: Christoph Hellwig @ 2009-10-19  3:55 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Michael Monnerie, xfs

On Mon, Oct 19, 2009 at 12:30:18PM +1100, Dave Chinner wrote:
> 
> IIRC, the main reason for 4k sectors on MD RAID5/6 is that changing
> the IO alignment from 4k to 512 byte IOs (i.e. sub-page sized)
> causes MD to flush and invalidate the stripe cache. Hence every
> time XFS writes a super block, AGF, AGFL or AGI, things go much
> slower because of this flush/invalidate. By setting the sector size
> to 4k, the SB/AGF/AGFL/AGI are all 4k in size and hence IO alignment

This should not happen anymore with 2.6 series kernels.



end of thread, other threads:[~2009-10-19  3:54 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-10-16  3:09 XFS mount failure on RAID5 hank peng
2009-10-16  8:19 ` Michael Monnerie
2009-10-16 15:27   ` Eric Sandeen
2009-10-17 21:27     ` Michael Monnerie
2009-10-19  1:30       ` Dave Chinner
2009-10-19  3:55         ` Christoph Hellwig
2009-10-16  8:30 ` Justin Piszcz
2009-10-16 15:28 ` Eric Sandeen
2009-10-16 15:55   ` Eric Sandeen
2009-10-19  0:54   ` hank peng
