public inbox for linux-btrfs@vger.kernel.org
* More random write performance data
@ 2009-04-08 21:38 Steven Pratt
  2009-04-08 23:09 ` Chris Mason
  0 siblings, 1 reply; 4+ messages in thread
From: Steven Pratt @ 2009-04-08 21:38 UTC (permalink / raw)
  To: linux-btrfs

Given the anomalies we were seeing on random write workloads, I decided 
to simplify the test and do a single-threaded O_DIRECT random write.  This 
should eliminate the locking issue as well as any bursty pdflush 
behavior.  What I got was not quite what I expected.
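
In case anyone wants to reproduce the I/O pattern without ffsb, the 
workload boils down to roughly the single-threaded O_DIRECT random 
writer below.  This is just a sketch: the real run used ffsb, and the 
file name, file size and request size here are placeholders rather than 
the actual benchmark settings.

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const size_t bs = 4096;                  /* request size (placeholder) */
        const off_t fsize = 1024 * 1024 * 1024;  /* 1GB file (placeholder) */
        char *buf;
        int fd;
        long i;

        /* O_DIRECT needs a block-aligned buffer */
        if (posix_memalign((void **)&buf, 4096, bs))
                return 1;
        memset(buf, 0, bs);

        fd = open("/mnt/btrfs/testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0)
                return 1;

        /* one thread, block-aligned random offsets, overwritten in place */
        for (i = 0; i < 1000000; i++) {
                off_t off = (random() % (fsize / bs)) * bs;
                if (pwrite(fd, buf, bs, off) != (ssize_t)bs)
                        break;
        }
        close(fd);
        return 0;
}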

The most interesting graph is probably #12, DM write throughput.  We 
see a baseline of ~7MB/sec with spikes every 30 seconds, which presumably 
correspond to the periodic transaction commits.  I assume the spikes are 
metadata related, as the IO is being done from user space at a constant 
rate.  The really odd thing is that over the almost 2 hour duration of 
the run, the amplitude of the spikes continues to climb, meaning the 
amount of metadata that needs to be flushed to disk is ever increasing.

http://btrfs.boxacle.net/repository/raid/longrun/btrfs-longrun-1thread/btrfs1.ffsb.random_writes__threads_0001.09-04-08_13.05.54/analysis/iostat-processed.001/chart.html

Looking at graph #8, DM IO/sec, we see that there is even a pattern 
within the pattern of spikes.  The number of IOs in each spike appears 
to change at each interval, and the sequence repeats over a set of 
seven 30-second intervals.

Also, we see that we average 12MB/sec of data written out for 5MB/sec 
of benchmark throughput, about 2.4x the benchmark rate.

I have queued up a run with checksums and COW disabled to see how much 
this overhead is reduced.
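
For reference, disabling those boils down to mount options along these 
lines (the device and mount point are placeholders, and the exact 
invocation the benchmark harness uses may differ):

  # data checksums off, COW still on
  mount -t btrfs -o nodatasum /dev/sdb /mnt/btrfs

  # data COW off as well; nodatacow also implies no data checksums
  mount -t btrfs -o nodatacow /dev/sdb /mnt/btrfs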

Steve



* Re: More random write performance data
  2009-04-08 21:38 More random write performance data Steven Pratt
@ 2009-04-08 23:09 ` Chris Mason
  2009-04-09 21:41   ` Steven Pratt
  0 siblings, 1 reply; 4+ messages in thread
From: Chris Mason @ 2009-04-08 23:09 UTC (permalink / raw)
  To: Steven Pratt; +Cc: linux-btrfs

On Wed, 2009-04-08 at 16:38 -0500, Steven Pratt wrote:
> Given the anomalies we were seeing on random write workloads, I decided 
> to simplify the test and do a single-threaded O_DIRECT random write.  
> This should eliminate the locking issue as well as any bursty pdflush 
> behavior.  What I got was not quite what I expected.
> 
> The most interesting graph is probably #12, DM write throughput.  We 
> see a baseline of ~7MB/sec with spikes every 30 seconds.  I assume the 
> spikes are metadata related, as the IO is being done from user space at 
> a constant rate.  The really odd thing is that over the almost 2 hour 
> duration of the run, the amplitude of the spikes continues to climb, 
> meaning the amount of metadata that needs to be flushed to disk is ever 
> increasing.
> 
> http://btrfs.boxacle.net/repository/raid/longrun/btrfs-longrun-1thread/btrfs1.ffsb.random_writes__threads_0001.09-04-08_13.05.54/analysis/iostat-processed.001/chart.html
> 
> Looking at graph #8, DM IO/sec, we see that there is even a pattern 
> within the pattern of spikes.  The number of IOs in each spike appears 
> to change at each interval, and the sequence repeats over a set of 
> seven 30-second intervals.
> 
> Also, we see that we average 12MB/sec of data written out, for 5MB/sec 
> of benchmark throughput.
> 
> I have queued up a run with checksums and COW disabled to see how much 
> this overhead is reduced.

Really interesting, thanks Steve.

I'll have to run it at home next week, but I think the high metadata
writeback is related to updating backrefs on the extent allocation tree.

Most of the reads during the random write are from the same thing.  So,
we're experimenting with changes on that end as well.

-chris




* Re: More random write performance data
  2009-04-08 23:09 ` Chris Mason
@ 2009-04-09 21:41   ` Steven Pratt
  2009-04-09 22:21     ` Chris Mason
  0 siblings, 1 reply; 4+ messages in thread
From: Steven Pratt @ 2009-04-09 21:41 UTC (permalink / raw)
  To: Chris Mason; +Cc: linux-btrfs

Chris Mason wrote:
> On Wed, 2009-04-08 at 16:38 -0500, Steven Pratt wrote:
>   
>> Given the anomalies we were seeing on random write workloads, I decided 
>> to simplify the test and do a single-threaded O_DIRECT random write.  
>> This should eliminate the locking issue as well as any bursty pdflush 
>> behavior.  What I got was not quite what I expected.
>>
>> The most interesting graph is probably #12, DM write throughput.  We 
>> see a baseline of ~7MB/sec with spikes every 30 seconds.  I assume the 
>> spikes are metadata related, as the IO is being done from user space 
>> at a constant rate.  The really odd thing is that over the almost 
>> 2 hour duration of the run, the amplitude of the spikes continues to 
>> climb, meaning the amount of metadata that needs to be flushed to disk 
>> is ever increasing.
>>
>> http://btrfs.boxacle.net/repository/raid/longrun/btrfs-longrun-1thread/btrfs1.ffsb.random_writes__threads_0001.09-04-08_13.05.54/analysis/iostat-processed.001/chart.html
>>
>> Looking at graph #8, DM IO/sec, we see that there is even a pattern 
>> within the pattern of spikes.  The number of IOs in each spike appears 
>> to change at each interval, and the sequence repeats over a set of 
>> seven 30-second intervals.
>>
>> Also, we see that we average 12MB/sec of data written out, for 5MB/sec 
>> of benchmark throughput.
>>
>> I have queued up a run with checksums and COW disabled to see how much 
>> this overhead is reduced.
>>     
>
> Really interesting, thanks Steve.
>
> I'll have to run it at home next week, but I think the high metadata
> writeback is related to updating backrefs on the extent allocation tree.
>   
Well, it looks like you are correct.  Using nodatacow has virtually 
eliminated the extra writes.  It is also responsible for a whopping 40x 
increase in multi-threaded random write performance (2.5MB/sec -> 
95MB/sec)!  See complete details in the new history graphs, which I have 
updated with a new baseline, a run with no csums, and a run with no 
csums and no cow.

http://btrfs.boxacle.net/repository/raid/history/History.html

nocow makes a massive difference on the random write workloads, while 
no csums helps the heavily threaded sequential workloads (sequential 
read and create).

Steve

> Most of the reads during the random write are from the same thing.  So,
> we're experimenting with changes on that end as well.
>
> -chris
>
>   



* Re: More random write performance data
  2009-04-09 21:41   ` Steven Pratt
@ 2009-04-09 22:21     ` Chris Mason
  0 siblings, 0 replies; 4+ messages in thread
From: Chris Mason @ 2009-04-09 22:21 UTC (permalink / raw)
  To: Steven Pratt; +Cc: linux-btrfs

On Thu, 2009-04-09 at 16:41 -0500, Steven Pratt wrote:
> Chris Mason wrote:
> > On Wed, 2009-04-08 at 16:38 -0500, Steven Pratt wrote:
> >   
> >> Given the anomalies we were seeing on random write workloads, I decided 
> >> to simplify the test and do a single-threaded O_DIRECT random write.  
> >> This should eliminate the locking issue as well as any bursty pdflush 
> >> behavior.  What I got was not quite what I expected.
> >>
> >> The most interesting graph is probably #12, DM write throughput.  We 
> >> see a baseline of ~7MB/sec with spikes every 30 seconds.  I assume the 
> >> spikes are metadata related, as the IO is being done from user space 
> >> at a constant rate.  The really odd thing is that over the almost 
> >> 2 hour duration of the run, the amplitude of the spikes continues to 
> >> climb, meaning the amount of metadata that needs to be flushed to disk 
> >> is ever increasing.
> >>
> >> http://btrfs.boxacle.net/repository/raid/longrun/btrfs-longrun-1thread/btrfs1.ffsb.random_writes__threads_0001.09-04-08_13.05.54/analysis/iostat-processed.001/chart.html
> >>
> >> Looking at graph #8, DM IO/sec, we see that there is even a pattern 
> >> within the pattern of spikes.  The number of IOs in each spike appears 
> >> to change at each interval, and the sequence repeats over a set of 
> >> seven 30-second intervals.
> >>
> >> Also, we see that we average 12MB/sec of data written out, for 5MB/sec 
> >> of benchmark throughput.
> >>
> >> I have queued up a run with checksums and COW disabled to see how much 
> >> this overhead is reduced.
> >>     
> >
> > Really interesting, thanks Steve.
> >
> > I'll have to run it at home next week, but I think the high metadata
> > writeback is related to updating backrefs on the extent allocation tree.
> >   
> Well, it looks like you are correct.  Using nodatacow has virtually 
> eliminated the extra writes.  It is also responsible for a whopping 40x 
> increase in multi-threaded random write performance (2.5MB/sec -> 
> 95MB/sec)!  See complete details in the new history graphs, which I 
> have updated with a new baseline, a run with no csums, and a run with 
> no csums and no cow.
> 
> http://btrfs.boxacle.net/repository/raid/history/History.html
> 

Whoa.  So we're on the right track; it's good to know the btree locking
is scaling well enough for the main btree as well.

> nocow makes a massive difference on the random write workloads, while 
> no csums helps the heavily threaded sequential workloads (sequential 
> read and create).

Ok, thanks again.

-chris


