public inbox for linux-btrfs@vger.kernel.org
* More random write performance data
@ 2009-04-08 21:38 Steven Pratt
  2009-04-08 23:09 ` Chris Mason
  0 siblings, 1 reply; 4+ messages in thread
From: Steven Pratt @ 2009-04-08 21:38 UTC (permalink / raw)
  To: linux-btrfs

Given the anomalies we were seeing on random write workloads, I decided 
to simplify the test and do single-threaded O_DIRECT random writes.  This 
should eliminate the locking issue as well as any bursty pdflush 
behavior.  What I got was not quite what I expected.
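
For reference, a workload of this shape can be approximated by a minimal 
single-threaded random-write loop like the sketch below.  This is only an 
illustration, not the actual test: the real run used ffsb with O_DIRECT, 
which additionally requires sector-aligned buffers, so plain pwrite plus a 
final fsync stands in here.  The file path, file size, and block size are 
illustrative assumptions.

```python
import os
import random

def random_write_loop(path, file_size=64 * 1024 * 1024,
                      block_size=4096, num_writes=1000, seed=0):
    """Single-threaded random writes at block-aligned offsets.

    Stand-in for the ffsb random-write workload; the real test used
    O_DIRECT, which also requires sector-aligned user buffers.
    """
    rng = random.Random(seed)
    block = b"\xab" * block_size
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        # Pre-size the file so every random offset lands inside it.
        os.ftruncate(fd, file_size)
        nblocks = file_size // block_size
        for _ in range(num_writes):
            offset = rng.randrange(nblocks) * block_size
            os.pwrite(fd, block, offset)
        os.fsync(fd)  # flush once; O_DIRECT would bypass the page cache
    finally:
        os.close(fd)
    return num_writes * block_size

bytes_written = random_write_loop("/tmp/randwrite.dat")
```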

The most interesting graph is probably #12, DM write throughput.  We 
see a baseline of ~7MB/sec with spikes every 30 seconds.  I assume the 
spikes are metadata related, since the IO is being issued from user space 
at a steady, constant rate.  The really odd thing is that over the 
almost 2-hour duration, the amplitude of the spikes continues to climb, 
meaning the amount of metadata needing to be flushed to disk is ever 
increasing.

http://btrfs.boxacle.net/repository/raid/longrun/btrfs-longrun-1thread/btrfs1.ffsb.random_writes__threads_0001.09-04-08_13.05.54/analysis/iostat-processed.001/chart.html

Looking at graph #8, DM IO/sec, we see that there is even a pattern 
within the pattern of spikes.  The number of IOs in each spike appears to 
change at each interval and repeats over a set of seven 30-second intervals.

Also, we see that we average 12MB/sec of data written out for only 
5MB/sec of benchmark throughput.
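
As a back-of-the-envelope check on that ratio (derived only from the two 
numbers above, not from any additional measurement): 12MB/sec of disk 
writes for 5MB/sec of benchmark throughput is a write amplification of 
2.4x, i.e. 7MB/sec, or roughly 58% of all writes, going to something 
other than user data.

```python
# Back-of-the-envelope write amplification from the numbers above.
disk_mb_s = 12.0   # average data written to disk (from the graph)
bench_mb_s = 5.0   # benchmark (user-visible) throughput

amplification = disk_mb_s / bench_mb_s     # 2.4x
overhead_mb_s = disk_mb_s - bench_mb_s     # 7.0 MB/sec not user data
overhead_frac = overhead_mb_s / disk_mb_s  # fraction of disk writes

print(f"amplification={amplification:.1f}x, "
      f"overhead={overhead_mb_s:.1f}MB/s ({overhead_frac:.0%})")
```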

I have queued up a run with checksums and COW disabled to see how much 
this overhead is reduced.

Steve




Thread overview: 4+ messages
2009-04-08 21:38 More random write performance data Steven Pratt
2009-04-08 23:09 ` Chris Mason
2009-04-09 21:41   ` Steven Pratt
2009-04-09 22:21     ` Chris Mason
