From: Steven Pratt <slpratt@austin.ibm.com>
To: Chris Mason <chris.mason@oracle.com>
Cc: linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: More random write performance data
Date: Thu, 09 Apr 2009 16:41:30 -0500
Message-ID: <49DE6B8A.1010801@austin.ibm.com>
In-Reply-To: <1239232172.31826.1.camel@think.oraclecorp.com>

Chris Mason wrote:
> On Wed, 2009-04-08 at 16:38 -0500, Steven Pratt wrote:
>   
>> Given the anomalies we were seeing on random write workloads, I decided
>> to simplify the test and do a single-threaded O_DIRECT random write.  This
>> should eliminate the locking issue as well as any bursty pdflush
>> behavior.  What I got was not quite what I expected.
>>
>> The most interesting graph is probably #12, DM write throughput.  We
>> see a baseline of ~7MB/sec with spikes every 30 seconds.  I assume the
>> spikes are metadata related, as the I/O is being done from user space at a
>> steady, constant rate.  The really odd thing is that for the entire
>> almost 2-hour duration, the amplitude of the spikes continues to climb,
>> meaning the amount of metadata needing to be flushed to disk is ever
>> increasing.
>>
>> http://btrfs.boxacle.net/repository/raid/longrun/btrfs-longrun-1thread/btrfs1.ffsb.random_writes__threads_0001.09-04-08_13.05.54/analysis/iostat-processed.001/chart.html
>>
>> Looking at graph #8, DM IO/sec, we see that there is even a pattern
>> within the pattern of spikes.  The number of I/Os in each spike appears to
>> change at each interval and repeats over a set of seven 30-second intervals.
>>
>> Also, we see that we average 12MB/sec of data written out for only
>> 5MB/sec of benchmark throughput.
>>
>> I have queued up a run without checksums and COW to see how much this
>> overhead is reduced.
>>     
>
> Really interesting, thanks Steve.
>
> I'll have to run it at home next week, but I think the high metadata
> writeback is related to updating backrefs on the extent allocation tree.
>   
Well, it looks like you are correct.  Using nodatacow has virtually
eliminated the extra writes.  It is also responsible for a whopping 40x
increase in multi-threaded random write performance (2.5MB/sec ->
95MB/sec)!  See complete details in the new history graphs, which I have
updated with a new baseline, a run with no csums, and a run with no
csums and no COW.

http://btrfs.boxacle.net/repository/raid/history/History.html
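For reference, the no-csum and no-COW runs use the standard btrfs mount
options for this (the device and mount point below are placeholders, not
the actual test rig):

  mount -t btrfs -o nodatasum /dev/sdX /mnt/btrfs   # data checksums off
  mount -t btrfs -o nodatacow /dev/sdX /mnt/btrfs   # data COW off; implies nodatasum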

nocow makes a massive difference on the random write workloads, while no
csums helps the heavily threaded sequential workloads (sequential read
and create).
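
If anyone wants to reproduce the single-threaded odirect random write
phase outside of ffsb, it boils down to a loop like the one below.  This
is just a rough sketch, not the actual ffsb code; the file name, file
size, and 4KiB block size are made-up placeholders.

  #define _GNU_SOURCE                /* for O_DIRECT */
  #include <fcntl.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
          const size_t bs = 4096;                    /* O_DIRECT wants aligned sizes */
          const off_t fsize = 1024L * 1024 * 1024;   /* pre-created 1GiB test file */
          void *buf;
          int fd = open("/mnt/btrfs/testfile", O_WRONLY | O_DIRECT);

          if (fd < 0 || posix_memalign(&buf, bs, bs))  /* buffer must be aligned too */
                  return 1;
          for (;;) {
                  /* overwrite a random block-aligned offset in place */
                  off_t off = ((off_t)(rand() % (fsize / bs))) * bs;
                  if (pwrite(fd, buf, bs, off) != (ssize_t)bs)
                          break;
          }
          close(fd);
          free(buf);
          return 0;
  }

Pointing the same loop at a nodatacow mount versus a default one should
show the same gap the full benchmark does.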

Steve

> Most of the reads during the random write are from the same thing.  So,
> we're experimenting with changes on that end as well.
>
> -chris
>
>   


Thread overview: 4+ messages
2009-04-08 21:38 More random write performance data Steven Pratt
2009-04-08 23:09 ` Chris Mason
2009-04-09 21:41   ` Steven Pratt [this message]
2009-04-09 22:21     ` Chris Mason
