From: Dave Chinner <david@fromorbit.com>
To: Yannis Klonatos <klonatos@ics.forth.gr>
Cc: xfs@oss.sgi.com
Subject: Re: XFS peculiar behavior
Date: Thu, 24 Jun 2010 09:17:00 +1000 [thread overview]
Message-ID: <20100623231700.GP6590@dastard> (raw)
In-Reply-To: <4C21B9AF.9010307@ics.forth.gr>
On Wed, Jun 23, 2010 at 10:37:19AM +0300, Yannis Klonatos wrote:
> Hi all!
>
> I have come across the following peculiar behavior in XFS
> and I would appreciate any information anyone
> could provide.
> In our lab we have a system that has twelve 500GByte hard
> disks (total capacity 6TByte), connected to an
> Areca (ARC-1680D-IX-12) SAS storage controller. The disks are
> configured as a RAID-0 device. Then I create
> a clean XFS filesystem on top of the raid volume, using the whole
> capacity. We use this test-setup to measure
> performance improvement for a TPC-H experiment. We copy the database
> over the clean XFS filesystem using the
> cp utility. The database used in our experiments is 56GBytes in size
> (data + indices).
> The problem is that I have noticed that XFS may - not all
> times - split a table over a large disk distance. For
> example, in one run I noticed that a 13 GByte file was split
> over a 4.7 TByte distance (I calculate this distance by
> subtracting the first block used for the file from the final
> one. The two disk block values are acquired using the
> FIBMAP ioctl).
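[For reference, the distance calculation described above can be sketched in
Python. The ioctl numbers come from <linux/fs.h>; FIBMAP is privileged on
most kernels (needs root/CAP_SYS_RAWIO), and the helper names here are
illustrative, not what the poster actually ran.]

```python
import fcntl
import os
import struct

# ioctl numbers from <linux/fs.h>
FIBMAP = 0x01    # map a logical file block to a physical filesystem block
FIGETBSZ = 0x02  # return the filesystem block size in bytes

def fibmap(fd, logical_block):
    """Physical block backing logical_block (0 means a hole). Needs root."""
    buf = struct.pack("i", logical_block)
    return struct.unpack("i", fcntl.ioctl(fd, FIBMAP, buf))[0]

def span_bytes(first_blk, last_blk, block_size):
    """The 'disk distance' described above: (last - first) blocks, in bytes."""
    return (last_blk - first_blk) * block_size

def file_span(path):
    """Span between the first and last physical block of a file, in bytes."""
    fd = os.open(path, os.O_RDONLY)
    try:
        bsz = struct.unpack(
            "i", fcntl.ioctl(fd, FIGETBSZ, struct.pack("i", 0)))[0]
        nblocks = (os.fstat(fd).st_size + bsz - 1) // bsz
        if nblocks == 0:
            return 0
        return span_bytes(fibmap(fd, 0), fibmap(fd, nblocks - 1), bsz)
    finally:
        os.close(fd)
```

Note that this measures only the gap between the two end blocks; it says
nothing about how many extents lie in between, which is why xfs_bmap output
is more informative.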
> Is there some reasoning behind this (peculiar) behavior? I
> would expect that since the underlying storage is so
> large, and the dataset is so small, XFS would try to minimize disk
> seeks and thus place the file sequentially on disk.
> Furthermore, I understand that XFS may leave some blocks unused
> between subsequent file blocks in order to handle any write
> appends that may come afterward. But I wouldn't expect such a
> large splitting of a single file.
> Any help?
The reasons for it being split are wide and varied. We need more
information before trying to determine the reason.
The output of "xfs_info <mntpt>" will tell us your filesystem
geometry and the output of xfs_bmap <split file> will tell us
exactly how it was laid out on disk. These are needed to see exactly
what the problem is.
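[Illustrative invocations of the diagnostics requested above; the mount
point and file path are placeholders, not paths from the poster's setup.]

```shell
MNT=/mnt/raid                  # hypothetical mount point of the RAID-0 volume
FILE=$MNT/tpch/lineitem.tbl    # hypothetical split table file

xfs_info "$MNT"      # geometry: AG count and size, block size, log layout
xfs_bmap -v "$FILE"  # extent-by-extent layout: file offsets, block ranges, AGs
xfs_info -V          # xfsprogs version
uname -r             # kernel version
```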
Did you copy the file alone, with other files, or while there were
other write operations going on in the background? Was it a pristine
filesystem that you copied it to? If so, what directory structure
was created before/by the copy?
Also, the kernel version you are running, and the version of
xfsprogs you have installed (xfs_info -V) will help us determine if
you are tripping any known bugs...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 11+ messages
2010-06-23 7:37 XFS peculiar behavior Yannis Klonatos
2010-06-23 10:16 ` Michael Monnerie
2010-06-23 10:24 ` Andi Kleen
2010-06-23 15:04 ` Michael Monnerie
2010-06-23 16:21 ` Eric Sandeen
2010-06-23 23:17 ` Dave Chinner [this message]
2010-06-24 14:11 ` Yannis Klonatos
2010-06-24 15:21 ` Eric Sandeen
2010-06-24 15:35 ` Yannis Klonatos
2010-06-25 0:58 ` Dave Chinner
2010-06-25 0:46 ` Dave Chinner