linux-btrfs.vger.kernel.org archive mirror
* Space cache degradation
@ 2014-04-02 17:54 Justin Maggard
  2014-04-02 18:00 ` Chris Mason
  0 siblings, 1 reply; 2+ messages in thread
From: Justin Maggard @ 2014-04-02 17:54 UTC (permalink / raw)
  To: linux-btrfs

I've found that, after using some btrfs filesystems for some time,
the first large write after a reboot takes a very long time.  So
I went to work trying out different test cases to simplify
reproduction of the issue, and I've got it down to just these steps:

1) mkfs.btrfs on a large-ish device.  I used a 14TB MD RAID5 device.
2) Fill it up a bit over half-way with ~5MB files.  In my test I made
30 copies of a 266GB data set consisting of 52,356 files and 20,268
folders.
3) umount
4) mount
5) time fallocate -l 2G /mount/point/2G.dat
real 3m9.412s
user 0m0.002s
sys 0m2.939s
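For anyone retracing this, the steps above can be sketched as a dry-run script. The device, mount point, and data-set path below are hypothetical placeholders (the actual test used a 14TB MD RAID5 device); the commands are echoed rather than executed, since running them needs root and a large scratch device.

```shell
#!/bin/sh
# Dry-run sketch of the reproduction steps.  DEV, MNT and SRC are
# hypothetical placeholders, not paths from the original report.
DEV=/dev/md0        # assumption: your scratch block device
MNT=/mnt/test       # assumption: your mount point
SRC=/data/testset   # assumption: the ~266GB data set to copy

# Print each command instead of running it; drop the echo to execute.
run() { echo "+ $*"; }

run mkfs.btrfs -f "$DEV"
run mount "$DEV" "$MNT"
for i in $(seq 1 30); do
    run cp -a "$SRC" "$MNT/copy-$i"   # fill past the half-way mark
done
run umount "$MNT"
run mount "$DEV" "$MNT"               # space cache enabled by default
run time fallocate -l 2G "$MNT/2G.dat"
```

Swapping the echo in run() for real execution on a scratch machine reproduces the sequence as described.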

By comparison, if I don't use the space cache, things go much better:
# umount
# mount -o nospace_cache
# time fallocate -l 2G /mount/point/2G.dat
real 0m15.982s
user 0m0.002s
sys 0m0.103s

If I use the clear_cache mount option, that also resolves the slowness.
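For reference, the two workarounds amount to these mount invocations. The device and mount point are placeholders, not from the original report, and the commands are echoed rather than executed since mounting needs root and a real device:

```shell
# Hypothetical device and mount point -- only the -o options are the point.
mnt() { echo "mount -o $1 /dev/md0 /mnt/test"; }

mnt nospace_cache   # run without the free space cache entirely
mnt clear_cache     # throw away the on-disk cache and rebuild it
```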

Is this a known issue?  For me it's 100% reproducible, on various
kernel versions including 3.14-rc8.  Is there anything I should
provide to help debug?

-Justin


* Re: Space cache degradation
  2014-04-02 17:54 Space cache degradation Justin Maggard
@ 2014-04-02 18:00 ` Chris Mason
  0 siblings, 0 replies; 2+ messages in thread
From: Chris Mason @ 2014-04-02 18:00 UTC (permalink / raw)
  To: Justin Maggard, linux-btrfs



On 04/02/2014 01:54 PM, Justin Maggard wrote:
> I've found that, after using some btrfs filesystems for some time,
> the first large write after a reboot takes a very long time.  So
> I went to work trying out different test cases to simplify
> reproduction of the issue, and I've got it down to just these steps:
>
> 1) mkfs.btrfs on a large-ish device.  I used a 14TB MD RAID5 device.
> 2) Fill it up a bit over half-way with ~5MB files.  In my test I made
> 30 copies of a 266GB data set consisting of 52,356 files and 20,268
> folders.
> 3) umount
> 4) mount
> 5) time fallocate -l 2G /mount/point/2G.dat
> real 3m9.412s
> user 0m0.002s
> sys 0m2.939s
>
> By comparison, if I don't use the space cache, things go much better:
> # umount
> # mount -o nospace_cache
> # time fallocate -l 2G /mount/point/2G.dat
> real 0m15.982s
> user 0m0.002s
> sys 0m0.103s
>
> If I use the clear_cache mount option, that also resolves the slowness.
>
> Is this a known issue?  For me it's 100% reproducible, on various
> kernel versions including 3.14-rc8.  Is there anything I should
> provide to help debug?
>

Neat, not a known issue.  What's probably happening is that with the
space cache off, you're jumping out into unused space while the caching
thread regenerates the free-space information.  Once the caching thread
is done caching all the free space, you should go slowly again.

I'll try to reproduce.

-chris




