public inbox for linux-xfs@vger.kernel.org
From: Brian Foster <bfoster@redhat.com>
To: Mark Tinguely <tinguely@sgi.com>
Cc: Dave Chinner <dchinner@redhat.com>, xfs@oss.sgi.com
Subject: Re: xfs speculative preallocation -- fragmentation issue with sparse file handling?
Date: Mon, 18 Feb 2013 18:29:02 -0500	[thread overview]
Message-ID: <5122B93E.7060702@redhat.com> (raw)
In-Reply-To: <51229C21.4040102@sgi.com>

On 02/18/2013 04:24 PM, Mark Tinguely wrote:
> On 02/18/13 15:08, Brian Foster wrote:
>> Hi guys,
>>
>> I was running a sanity check of my quota throttling stuff rebased
>> against the updated speculative prealloc algorithm:
>>
>> a1e16c26 xfs: limit speculative prealloc size on sparse files
>>
>> ... and ran into an interesting behavior on my baseline test (quota
>> disabled).
>>
>> The test I'm running is a concurrent write of 32 files (10GB each) via
>> iozone (I'm not testing performance, just using it as a concurrent
>> writer):
>>
>> iozone -w -c -e -i 0 -+n -r 4k -s 10g -t 32 -F /mnt/data/file{0..31}
>>
>> ... what I noticed is that from monitoring du during the test,
>> speculative preallocation seemed to be ineffective. From further
>> tracing, I observed that imap[0].br_blockcount in
>> xfs_iomap_eof_prealloc_initial_size() was fairly consistently maxed out
>> at around 32768 blocks (128MB).
>>
>> Without the aforementioned commit, preallocation occurs as expected and
>> the files result in 7-9 extents after the test. With the commit, I'm in
>> the 70s to 80s range of number of extents with a max extent size of
>> 128MB. A couple examples of xfs_bmap output are appended to this
>> message. It seems like initial fragmentation might be throwing the
>> algorithm out of whack..?
>>
>> Brian
> 
> ... the patched version grows the preallocation by doubling:
> 
> +    if (imap[0].br_startblock == HOLESTARTBLOCK)
> +        return 0;
> 
>     vvvvvv
> +    if (imap[0].br_blockcount <= (MAXEXTLEN >> 1))
> +        return imap[0].br_blockcount;
>     ^^^^^^
> 
> +    return XFS_B_TO_FSB(mp, XFS_ISIZE(ip));
> +}
> 
> Have you experimented without the middle if statement?
> If I remember correctly from when I reviewed the code, removing it
> moves the behavior closer to the original code; namely, use the file
> size as the preallocation value.
> 

Just a quick update...

I've tested the change above, as well as a suggestion Dave made on IRC
to return (imap[0].br_blockcount << 1); both resolve the immediate
issue. I still need to verify the original test case works, and then
I'll post a patch. Thanks...

Brian

> --Mark.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

2013-02-18 21:08 xfs speculative preallocation -- fragmentation issue with sparse file handling? Brian Foster
2013-02-18 21:24 ` Mark Tinguely
2013-02-18 23:29   ` Brian Foster [this message]
2013-02-18 23:39     ` Dave Chinner
