From: Dave Chinner <david@fromorbit.com>
To: Carsten Oberscheid <oberscheid@doctronic.de>
Cc: xfs@oss.sgi.com
Subject: Re: Strange fragmentation in nearly empty filesystem
Date: Sat, 24 Jan 2009 11:33:29 +1100
Message-ID: <20090124003329.GE32390@disturbed>
In-Reply-To: <20090123102130.GB8012@doctronic.de>
On Fri, Jan 23, 2009 at 11:21:30AM +0100, Carsten Oberscheid wrote:
> Hi there,
>
> I am experiencing my XFS filesystem degrading over time in quite a
> strange and annoying way. Googling "XFS fragmentation" tells me either
> that this does not happen or to use xfs_fsr, which doesn't really help
> me anymore -- see below. I'd appreciate any help on this.
>
> Background: I am using two VMware virtual machines on my Linux
> desktop. These virtual machines store images of their main memory in
> .vmem files, which are about half a gigabyte in size for each of my
> VMs. The .vmem files are created when starting the VM, written when
> suspending it and read when resuming. I prefer suspending and resuming
> over shutting down and booting again, so with my VMs these files can
> have a lifetime of several weeks.
Oh, that's vmware being incredibly stupid about how they write
out the memory images. They only write pages that are allocated,
so the result is a sparse file full of holes. Effectively this
guarantees file fragmentation over time as random holes are
filled. For example, a .vmem file on a recent VM I built:
$ xfs_bmap -vvp foo.vmem |grep hole |wc -l
675
$ xfs_bmap -vvp foo.vmem |grep -v hole |wc -l
885
$
Contains 675 holes and almost 900 real extents in a 512MB memory
image that has only 160MB of data blocks allocated.
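You can reproduce the underlying pattern without vmware at all: write a
single page near the end of an otherwise-untouched 512MB file and the
filesystem leaves a hole behind it. A minimal sketch (the filename
sparse.vmem is made up for illustration):

```shell
# Write one 4KiB block at the far end of a 512MB range, mimicking a
# .vmem writer that skips unallocated guest pages. Everything before
# the written block is a hole - apparent size is 512MB, but only the
# one written page has disk blocks allocated.
dd if=/dev/zero of=sparse.vmem bs=4k count=1 seek=131071 2>/dev/null

# Apparent size vs. allocated 512-byte blocks:
stat -c 'size=%s blocks=%b' sparse.vmem

# On XFS, "xfs_bmap -v sparse.vmem" would show the large hole
# preceding the lone data extent.
```

Every later write into one of those holes allocates a fresh extent
wherever the allocator can find space at that moment, which is exactly
how the hundreds of extents in the bmap output above accumulate.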
In reality, this is a classic case of the application doing a "smart
optimisation" that looks good in the short term (i.e. saves some
disk space), but that has very bad long term side effects (i.e.
guaranteed fragmentation of the file in the long term).
You might be able to pre-allocate the .vmem files with an xfs_io
hack right after the file is created, prior to it becoming badly
fragmented; that should avoid the worst-case fragmentation caused
by writing randomly to a sparse file.
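A sketch of that hack (the filename prealloc.vmem is illustrative):
reserve the file's full size up front, so later random writes land in
space that is already allocated and contiguous rather than punching
extents into holes.

```shell
# On XFS, xfs_io can reserve the full range as unwritten extents
# without writing any data:
#   xfs_io -c 'resvsp 0 512m' prealloc.vmem
#
# fallocate(1) below is the generic Linux equivalent and is what
# this sketch actually runs:
fallocate -l 512M prealloc.vmem

# All blocks are now allocated up front - size and allocated
# 512-byte blocks should both reflect the full 512MB:
stat -c 'size=%s blocks=%b' prealloc.vmem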
In summary, this is an application problem, not a filesystem
issue.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs