* Re: File System Performance results
From: Steven Pratt @ 2008-10-29 21:49 UTC (permalink / raw)
To: Chris Mason; +Cc: linux-fsdevel, linux-btrfs
Steven Pratt wrote:
> Chris Mason wrote:
>> On Wed, 2008-10-22 at 15:06 -0500, Steven Pratt wrote:
>>
>>> We have set up a new page which is intended mainly for tracking the
>>> performance of BTRFS, but in doing so we are testing other
>>> filesystems as well (ext3, ext4, xfs and jfs). Thought some people
>>> here might find the results useful.
>>>
>>
>> I think I understand the bad read performance in btrfs. I was forcing a
>> tiny max readahead size.
>>
>> The current git tree has fixes for it, along with a ton of new code.
>>
> Ok, we'll kick off some new runs.
>
> Also, we just need to push out the data for the odirect random write
> and mail server with fsync runs.
> Steve
We have completed the odirect random write tests.
Results can be found at:
Single disk:
http://btrfs.boxacle.net/repository/single-disk/Oct28-odirect-random-writes/Oct28-odirect-random-writes.html
Raid:
http://btrfs.boxacle.net/repository/raid/Oct28-odirect-random-writes/Oct28-odirect-random-writes.html
The changes to the mail server emulation to use fsync on the
creates are also complete:
Single disk:
http://btrfs.boxacle.net/repository/single-disk/Oct28-mail-server-fsync-creates/Oct28-mail-server-fsync-creates.html
Raid:
http://btrfs.boxacle.net/repository/raid/Oct28-mail-server-fsync-creates/mail-server-fsync-creates.html
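For anyone unfamiliar with the workload change: "fsync on creates" means
each newly created message file is fsync'd before close, so a message the
server has acknowledged is actually on stable storage rather than only in
the page cache. A rough user-space sketch of that pattern (the helper name
here is made up for illustration, it is not part of the benchmark code):

```c
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Illustrative helper: create a message file, write it, and fsync before
 * close so the data survives a crash.  Returns 0 on success, -1 on error. */
int create_message_durable(const char *path, const char *body)
{
    int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    ssize_t len = (ssize_t)strlen(body);
    if (write(fd, body, len) != len) {
        close(fd);
        return -1;
    }
    if (fsync(fd) != 0) {   /* force file data and metadata to disk */
        close(fd);
        return -1;
    }
    return close(fd);
}
```

The extra fsync per create is exactly what makes this variant of the mail
server workload so much harder on the filesystem's transaction commit path.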
Steve
>
>> -chris
>>
>>
>
* Re: File System Performance results
From: Chris Mason @ 2008-11-03 19:34 UTC (permalink / raw)
To: Steven Pratt; +Cc: linux-btrfs
On Mon, 2008-11-03 at 13:14 -0600, Steven Pratt wrote:
> Chris Mason wrote:
> > On Wed, 2008-10-22 at 15:06 -0500, Steven Pratt wrote:
> >
> >> We have set up a new page which is intended mainly for tracking the
> >> performance of BTRFS, but in doing so we are testing other filesystems
> >> as well (ext3, ext4, xfs and jfs). Thought some people here might find
> >> the results useful.
> >>
> >
> > I think I understand the bad read performance in btrfs. I was forcing a
> > tiny max readahead size.
> >
> > The current git tree has fixes for it, along with a ton of new code.
> >
> Results for the new code (git pull on 10/29) on the raid system are
> complete. Sequential read with a small number of threads has increased
> dramatically; however, with a large number of threads (128) we see a
> large drop-off in performance from before, as well as a huge spike in
> CPU utilization. A quick look at the oprofile output reveals some new
> functions at the top which seem really out of place on a read-only
> workload.
>
Thanks, we've got a ton of new code in there, and I'm working through
some performance testing as well.
> samples    %        image name      app name        symbol name
> 13752215   23.8658  btrfs.ko        btrfs           alloc_extent_state
> 12840571   22.2837  btrfs.ko        btrfs           free_extent_state
> 9658945    16.7623  vmlinux-2.6.27  vmlinux-2.6.27  crc32c_le
>
> Both of the extent_state functions have overtaken the crc function at
> the top of the profile. Why would we be messing with extent states on a
> read-only workload?
It depends ;) We'll have to get a call graph of who is calling that.
-chris
* Re: File System Performance results
From: Chris Mason @ 2008-11-10 18:11 UTC (permalink / raw)
To: Steven Pratt; +Cc: linux-btrfs
On Mon, 2008-11-03 at 13:14 -0600, Steven Pratt wrote:
> Chris Mason wrote:
> > On Wed, 2008-10-22 at 15:06 -0500, Steven Pratt wrote:
> >
> >> We have set up a new page which is intended mainly for tracking the
> >> performance of BTRFS, but in doing so we are testing other filesystems
> >> as well (ext3, ext4, xfs and jfs). Thought some people here might find
> >> the results useful.
> >>
> >
> > I think I understand the bad read performance in btrfs. I was forcing a
> > tiny max readahead size.
> >
> > The current git tree has fixes for it, along with a ton of new code.
> >
> Results for the new code (git pull on 10/29) on the raid system are
> complete. Sequential read with a small number of threads has increased
> dramatically; however, with a large number of threads (128) we see a
> large drop-off in performance from before, as well as a huge spike in
> CPU utilization. A quick look at the oprofile output reveals some new
> functions at the top which seem really out of place on a read-only
> workload.
>
> samples    %        image name  app name  symbol name
> 13752215   23.8658  btrfs.ko    btrfs     alloc_extent_state
> 12840571   22.2837  btrfs.ko    btrfs     free_extent_state
It took a while, but I think I've tracked this down. Btrfs has some
debugging code to detect and cleanup leaks of the extent_state structs,
and this adds a lot of contention to the alloc_extent_state and
free_extent_state code.
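In user-space terms, the pattern looks roughly like this (an illustrative
sketch, not the actual btrfs code; the `_demo` names are invented): every
allocation is linked into one global leak list under a single lock, so
alloc/free calls from many CPUs all serialize on that lock.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Stand-in for the real struct; only the leak-list links matter here. */
struct extent_state_demo {
    struct extent_state_demo *next, *prev;
    /* ... real fields omitted ... */
};

static struct extent_state_demo leak_head = { &leak_head, &leak_head };
static pthread_mutex_t leak_lock = PTHREAD_MUTEX_INITIALIZER;
static long leak_count;

struct extent_state_demo *alloc_extent_state_demo(void)
{
    struct extent_state_demo *s = malloc(sizeof(*s));
    if (!s)
        return NULL;
    pthread_mutex_lock(&leak_lock);   /* one global lock: the contention point */
    s->next = leak_head.next;
    s->prev = &leak_head;
    leak_head.next->prev = s;
    leak_head.next = s;
    leak_count++;
    pthread_mutex_unlock(&leak_lock);
    return s;
}

void free_extent_state_demo(struct extent_state_demo *s)
{
    pthread_mutex_lock(&leak_lock);   /* every free hits the same lock too */
    s->prev->next = s->next;
    s->next->prev = s->prev;
    leak_count--;
    pthread_mutex_unlock(&leak_lock);
    free(s);
}

long extent_state_leaks(void)   /* anything still on the list has "leaked" */
{
    long n;
    pthread_mutex_lock(&leak_lock);
    n = leak_count;
    pthread_mutex_unlock(&leak_lock);
    return n;
}
```

The leak detection itself is cheap per call, but with 128 readers hammering
one lock-protected list, the lock acquisition dominates the profile.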
I've pushed out fixes for this, along with some other important
optimizations that should improve btrfs scores in your benchmarks.
Could you please give things a try?
-chris