public inbox for linux-xfs@vger.kernel.org
* slow xfs writes on loopback mounted xfs with dd + seek
@ 2012-09-11  9:48 Colin Ian King
  2012-09-11 14:52 ` Brian Foster
  0 siblings, 1 reply; 4+ messages in thread
From: Colin Ian King @ 2012-09-11  9:48 UTC (permalink / raw)
  To: xfs

Hi,

I'm seeing really slow I/O writes on xfs when doing a dd with a seek
offset to a file on an xfs file system which is loop mounted.

Reproduced on Linux 3.6.0-rc5 and 3.4

How to reproduce:

dd if=/dev/zero of=xfs.img bs=1M count=1024
mkfs.xfs -f xfs.img
sudo mount -o loop -t xfs xfs.img /mnt/test

First create a large file, write performance is excellent:

sudo dd if=/dev/zero of=/mnt/test/big bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 1.69451 s, 309 MB/s
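For scale (ignoring mkfs metadata overhead), that first file fills roughly half of the 1 GiB image:

```shell
img_mib=1024   # size of xfs.img
file_mib=500   # size of /mnt/test/big
echo "$((file_mib * 100 / img_mib))% of the image used by one file"
```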

...next, seek and write some more blocks; write performance is poor:

sudo dd if=/dev/zero of=/mnt/test/big obs=4K count=8192 seek=131072
8192+0 records in
1024+0 records out
4194304 bytes (4.2 MB) copied, 47.0644 s, 89.1 kB/s
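For reference, dd's seek= counts output blocks of obs bytes, so the second command starts writing at a 512 MiB offset, a little past the end of the 500 MiB file:

```shell
# seek=131072 output blocks of obs=4K each:
echo $((131072 * 4096))       # byte offset where the write starts (512 MiB)
echo $((500 * 1024 * 1024))   # end of the 500 MiB file from the first dd
```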

Using blktrace and seektracer I've captured the I/O on the block device 
containing the xfs.img and I'm seeing ~55-70 seeks per second during the 
slow writes, which seems excessive.

I can reproduce this on hardware with 1, 4 or 8 CPUs.

I've tested this with other file systems and don't see this issue, so
it looks like an xfs + loop mount issue.

Is this a known performance "feature"?

Colin

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: slow xfs writes on loopback mounted xfs with dd + seek
  2012-09-11  9:48 slow xfs writes on loopback mounted xfs with dd + seek Colin Ian King
@ 2012-09-11 14:52 ` Brian Foster
  2012-09-11 20:50   ` Dave Chinner
  0 siblings, 1 reply; 4+ messages in thread
From: Brian Foster @ 2012-09-11 14:52 UTC (permalink / raw)
  To: Colin Ian King; +Cc: xfs

On 09/11/2012 05:48 AM, Colin Ian King wrote:
> Hi,
> 
> I'm seeing really slow I/O writes on xfs when doing a dd with a seek
> offset to a file on an xfs file system which is loop mounted.
> 
> Reproduced on Linux 3.6.0-rc5 and 3.4
> 
> How to reproduce:
> 
> dd if=/dev/zero of=xfs.img bs=1M count=1024
> mkfs.xfs -f xfs.img
> sudo mount -o loop -t xfs xfs.img /mnt/test
> 
> First create a large file, write performance is excellent:
> 
> sudo dd if=/dev/zero of=/mnt/test/big bs=1M count=500
> 500+0 records in
> 500+0 records out
> 524288000 bytes (524 MB) copied, 1.69451 s, 309 MB/s
> 
> ...next, seek and write some more blocks; write performance is poor:
> 
> sudo dd if=/dev/zero of=/mnt/test/big obs=4K count=8192 seek=131072
> 8192+0 records in
> 1024+0 records out
> 4194304 bytes (4.2 MB) copied, 47.0644 s, 89.1 kB/s
> 

Hi Colin,

I reproduced this behavior with a 1GB filesystem on a loop device. I
think the problem you're seeing is likely a consequence of the fact that
you're writing to a point where you are close to filling the fs.

Taking a look at the tracepoint data when running your second dd alone
vs. in succession to the first, I see a fairly clean
buffered_write->get_blocks pattern vs. a
buffered_write->enospc->log_force->get_blocks pattern.

In other words, you're triggering an internal space allocation failure
and flush sequence intended to free up space. Somebody else might be
able to chime in and more ably explain why that occurs following a
truncate (perhaps the space isn't freed until the change hits the log),
but regardless this doesn't seem to occur if you increase the size of
the fs.

Brian

> Using blktrace and seektracer I've captured the I/O on the block device
> containing the xfs.img and I'm seeing ~55-70 seeks per second during the
> slow writes, which seems excessive.
> 
> I can reproduce this on hardware with 1, 4 or 8 CPUs.
> 
> I've tested this with other file systems and don't see this issue, so
> it looks like an xfs + loop mount issue.
> 
> Is this a known performance "feature"?
> 
> Colin
> 



* Re: slow xfs writes on loopback mounted xfs with dd + seek
  2012-09-11 14:52 ` Brian Foster
@ 2012-09-11 20:50   ` Dave Chinner
  2012-09-11 21:00     ` Colin Ian King
  0 siblings, 1 reply; 4+ messages in thread
From: Dave Chinner @ 2012-09-11 20:50 UTC (permalink / raw)
  To: Brian Foster; +Cc: Colin Ian King, xfs

On Tue, Sep 11, 2012 at 10:52:23AM -0400, Brian Foster wrote:
> On 09/11/2012 05:48 AM, Colin Ian King wrote:
> > Hi,
> > 
> > I'm seeing really slow I/O writes on xfs when doing a dd with a seek
> > offset to a file on an xfs file system which is loop mounted.
> > 
> > Reproduced on Linux 3.6.0-rc5 and 3.4
> > 
> > How to reproduce:
> > 
> > dd if=/dev/zero of=xfs.img bs=1M count=1024
> > mkfs.xfs -f xfs.img
> > sudo mount -o loop -t xfs xfs.img /mnt/test
> > 
> > First create a large file, write performance is excellent:
> > 
> > sudo dd if=/dev/zero of=/mnt/test/big bs=1M count=500
> > 500+0 records in
> > 500+0 records out
> > 524288000 bytes (524 MB) copied, 1.69451 s, 309 MB/s
> > 
> > ...next, seek and write some more blocks; write performance is poor:
> > 
> > sudo dd if=/dev/zero of=/mnt/test/big obs=4K count=8192 seek=131072
> > 8192+0 records in
> > 1024+0 records out
> > 4194304 bytes (4.2 MB) copied, 47.0644 s, 89.1 kB/s
> > 
> 
> Hi Colin,
> 
> I reproduced this behavior with a 1GB filesystem on a loop device. I
> think the problem you're seeing is likely a consequence of the fact that
> you're writing to a point where you are close to filling the fs.
> 
> Taking a look at the tracepoint data when running your second dd alone
> vs. in succession to the first, I see a fairly clean
> buffered_write->get_blocks pattern vs. a
> buffered_write->enospc->log_force->get_blocks pattern.
> 
> In other words, you're triggering an internal space allocation failure
> and flush sequence intended to free up space. Somebody else might be
> able to chime in and more ably explain why that occurs following a
> truncate (perhaps the space isn't freed until the change hits the log),
> but regardless this doesn't seem to occur if you increase the size of
> the fs.

The space is considered "busy" and won't be reused until the
truncate transaction hits the log and the space is free on disk. See
fs/xfs/xfs_extent_busy.c.

Basically, testing XFS performance on tiny filesystems is going to
show false behaviours. XFS is optimised for large filesystems and
will typically show low-space artifacts on small filesystems,
especially when you are doing things like filling most of the free
filesystem space with 1 file.

e.g. 1GB free on a 100TB filesystem will throttle behaviours (say
speculative preallocation) much more effectively because it is within
1% of ENOSPC. That same 1GB free on a 1GB filesystem won't throttle
preallocation at all, and so that one file when it reaches a little
over 500MB will try to preallocate half the remaining space in the
filesystem because the filesystem is only 50% full....
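
A back-of-the-envelope sketch of the numbers above (illustrative only, not
the kernel's actual preallocation heuristic):

```shell
fs_mib=1024                        # 1 GiB filesystem
file_mib=500                       # written by the first dd
free_mib=$((fs_mib - file_mib))    # ~524 MiB nominally free
prealloc_mib=$((free_mib / 2))     # speculative preallocation grabs ~half
echo $((free_mib - prealloc_mib))  # real headroom left, in MiB
```

So the filesystem is far closer to ENOSPC than the 50%-full figure suggests.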

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com



* Re: slow xfs writes on loopback mounted xfs with dd + seek
  2012-09-11 20:50   ` Dave Chinner
@ 2012-09-11 21:00     ` Colin Ian King
  0 siblings, 0 replies; 4+ messages in thread
From: Colin Ian King @ 2012-09-11 21:00 UTC (permalink / raw)
  To: xfs

On 11/09/12 21:50, Dave Chinner wrote:
> On Tue, Sep 11, 2012 at 10:52:23AM -0400, Brian Foster wrote:
>> On 09/11/2012 05:48 AM, Colin Ian King wrote:
>>> Hi,
>>>
>>> I'm seeing really slow I/O writes on xfs when doing a dd with a seek
>>> offset to a file on an xfs file system which is loop mounted.
>>>
>>> Reproduced on Linux 3.6.0-rc5 and 3.4
>>>
>>> How to reproduce:
>>>
>>> dd if=/dev/zero of=xfs.img bs=1M count=1024
>>> mkfs.xfs -f xfs.img
>>> sudo mount -o loop -t xfs xfs.img /mnt/test
>>>
>>> First create a large file, write performance is excellent:
>>>
>>> sudo dd if=/dev/zero of=/mnt/test/big bs=1M count=500
>>> 500+0 records in
>>> 500+0 records out
>>> 524288000 bytes (524 MB) copied, 1.69451 s, 309 MB/s
>>>
>>> ...next, seek and write some more blocks; write performance is poor:
>>>
>>> sudo dd if=/dev/zero of=/mnt/test/big obs=4K count=8192 seek=131072
>>> 8192+0 records in
>>> 1024+0 records out
>>> 4194304 bytes (4.2 MB) copied, 47.0644 s, 89.1 kB/s
>>>
>>
>> Hi Colin,
>>
>> I reproduced this behavior with a 1GB filesystem on a loop device. I
>> think the problem you're seeing is likely a consequence of the fact that
>> you're writing to a point where you are close to filling the fs.
>>
>> Taking a look at the tracepoint data when running your second dd alone
>> vs. in succession to the first, I see a fairly clean
>> buffered_write->get_blocks pattern vs. a
>> buffered_write->enospc->log_force->get_blocks pattern.
>>
>> In other words, you're triggering an internal space allocation failure
>> and flush sequence intended to free up space. Somebody else might be
>> able to chime in and more ably explain why that occurs following a
>> truncate (perhaps the space isn't freed until the change hits the log),
>> but regardless this doesn't seem to occur if you increase the size of
>> the fs.
>
> The space is considered "busy" and won't be reused until the
> truncate transaction hits the log and the space is free on disk. See
> fs/xfs/xfs_extent_busy.c.
>
> Basically, testing XFS performance on tiny filesystems is going to
> show false behaviours. XFS is optimised for large filesystems and
> will typically show low-space artifacts on small filesystems,
> especially when you are doing things like filling most of the free
> filesystem space with 1 file.
>
> e.g. 1GB free on a 100TB filesystem will throttle behaviours (say
> speculative preallocation) much more effectively because it is within
> 1% of ENOSPC. That same 1GB free on a 1GB filesystem won't throttle
> preallocation at all, and so that one file when it reaches a little
> over 500MB will try to preallocate half the remaining space in the
> filesystem because the filesystem is only 50% full....
>

That's a really useful explanation. Thanks!
> Cheers,
>
> Dave.
>


