I have run into trouble with XFS, but excuse me if this question has
been asked a dozen times.
I am filling a very big file on an XFS filesystem on Linux that sits
on a software RAID 0. Performance is very good until I hit 2 "holes"
during which my write stalls for a few seconds.
Mkfs parameters:
mkfs.xfs -b size=4096 -s size=4096 -d agcount=2 -i size=2048
The RAID 0 is made of 2 SATA disks of 500 GB each.
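For what it's worth, a back-of-the-envelope check of the geometry this
gives (my own arithmetic, not tool output): with agcount=2 on the ~1 TB
array, each allocation group spans roughly 500 GB:

```shell
# Rough allocation-group size: total capacity / agcount
# (2 x 500 GB members in RAID 0, agcount=2 from the mkfs line above).
total_gb=$((2 * 500))
agcount=2
ag_gb=$((total_gb / agcount))
echo "each AG covers ~${ag_gb} GB"
```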
My test is just running "dd" with 8M blocks:
dd if=/dev/zero of=/DATA/big bs=8M
(/DATA is the XFS file system)
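In case it helps reproduce the stall more precisely, here is a rough
sketch that times each 8 MiB write separately, so the exact offset of
the hole shows up in the output (it writes to a temporary file here for
safety; on the real box TARGET would be /DATA/big):

```shell
# Time each 8 MiB chunk individually so a stall stands out as one slow chunk.
# TARGET is a temp file here; point it at /DATA/big for the real test.
TARGET=$(mktemp)
for i in $(seq 0 15); do
    start=$(date +%s%N)
    dd if=/dev/zero of="$TARGET" bs=8M count=1 seek="$i" conv=notrunc 2>/dev/null
    end=$(date +%s%N)
    echo "chunk $i: $(( (end - start) / 1000000 )) ms"
done
rm -f "$TARGET"
```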
The system is basically RHEL 5 with a 2.6.18 kernel and XFS
packages from CentOS.
The problem happens 2 times: once around 210 GB and a second time
around 688 GB (the hole in performance and response time is bigger
the second time -- around 20 seconds).
Do you have any clue? Do my mkfs parameters make sense? The goal
here is really to have something able to store big files at a
constant throughput -- the test is deliberate.