* XFS performance tracking and regression monitoring
@ 2008-10-23 23:29 Mark Goodwin
2008-10-24 3:54 ` Dave Chinner
0 siblings, 1 reply; 4+ messages in thread
From: Mark Goodwin @ 2008-10-23 23:29 UTC (permalink / raw)
To: xfs-oss
We're about to deploy a system+jbod dedicated for performance
regression tracking. The idea is to build the XFS dev branch
nightly, run a bunch of self contained benchmarks, and generate
a progressive daily report - date on the X-axis, with (perhaps)
wallclock runtime on the y-axis.
The aim is to track relative XFS performance on a daily basis
for various workloads on identical h/w. If each workload runs for
approx the same duration, the reports can all share the same
generic y-axis. The long term trend should have a positive
gradient. Regressions can be date correlated with commits.
Comments, benchmark suggestions? Anyone already running this?
Know of a test harness and/or report generator? Or will we
just roll our own - seems conceptually fairly simple.
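If we do roll our own, the report-generation side really is conceptually simple. A minimal sketch (the one-row-per-nightly-run CSV layout and the output file naming are hypothetical choices for illustration, not an existing format):

```python
import csv
from collections import defaultdict

def load_runs(csv_path):
    """Read rows of (date, benchmark, runtime) from a results CSV.

    Assumed layout (hypothetical): one row per nightly benchmark run,
    e.g.  2008-10-23,dbench,120.5
    """
    series = defaultdict(list)  # benchmark -> [(date, runtime), ...]
    with open(csv_path) as f:
        for date, bench, runtime in csv.reader(f):
            series[bench].append((date, float(runtime)))
    return series

def gnuplot_data(series, out_prefix):
    """Emit one date/runtime data file per benchmark, suitable for
    plotting with gnuplot: date on the x-axis, runtime on the y-axis."""
    for bench, points in series.items():
        with open(f"{out_prefix}-{bench}.dat", "w") as f:
            for date, runtime in sorted(points):
                f.write(f"{date} {runtime}\n")
```

The nightly cron job would append one row per benchmark to the CSV, then regenerate the per-benchmark data files and plots.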
Thanks
--
Mark Goodwin markgw@sgi.com
Engineering Manager for XFS and PCP Phone: +61-3-99631937
SGI Australian Software Group Cell: +61-4-18969583
-------------------------------------------------------------
* Re: XFS performance tracking and regression monitoring
2008-10-23 23:29 XFS performance tracking and regression monitoring Mark Goodwin
@ 2008-10-24 3:54 ` Dave Chinner
2008-10-24 7:12 ` Mark Goodwin
2008-10-24 15:37 ` Eric Sandeen
0 siblings, 2 replies; 4+ messages in thread
From: Dave Chinner @ 2008-10-24 3:54 UTC (permalink / raw)
To: Mark Goodwin; +Cc: xfs-oss
On Fri, Oct 24, 2008 at 09:29:42AM +1000, Mark Goodwin wrote:
> We're about to deploy a system+jbod dedicated for performance
> regression tracking. The idea is to build the XFS dev branch
> nightly, run a bunch of self contained benchmarks, and generate
> a progressive daily report - date on the X-axis, with (perhaps)
> wallclock runtime on the y-axis.
wallclock runtime is not indicative of relative performance
for many benchmarks. e.g. dbench runs for a fixed time and
then gives a throughput number as its output. It's the throughput
you want to compare.....
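So the harness needs a per-benchmark result extractor rather than a single walltime measurement. For instance, a sketch of pulling the throughput figure out of dbench's summary line (the exact line format, "Throughput N MB/sec M procs", is an assumption that may vary between dbench versions):

```python
import re

def dbench_throughput(output):
    """Extract the MB/sec figure from dbench's final summary line,
    e.g. 'Throughput 450.289 MB/sec 10 procs'.  Returns None if no
    such line is found (the format assumption may not hold for all
    dbench versions)."""
    m = re.search(r"Throughput\s+([\d.]+)\s+MB/sec", output)
    return float(m.group(1)) if m else None
```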
> The aim is to track relative XFS performance on a daily basis
> for various workloads on identical h/w. If each workload runs for
> approx the same duration, the reports can all share the same
> generic y-axis. The long term trend should have a positive
> gradient.
If you are measuring walltime, then you should see a negative
gradient as an indication of improvement....
> Regressions can be date correlated with commits.
For the benchmarks to be useful as regression tests, the
harness really needs to be profiling and gathering statistics at the
same time so that we might be able to determine what caused the
regression...
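Even the simplest version of that - snapshotting /proc/fs/xfs/stat before and after each run and archiving the counter deltas - costs nearly nothing. A sketch (each line in that file is a counter group name followed by integer counters; counter meanings are left to xfs_stats.h, this just diffs raw values):

```python
def parse_xfs_stat(text):
    """Parse /proc/fs/xfs/stat-style output: each line is a counter
    group name followed by whitespace-separated integer counters."""
    stats = {}
    for line in text.splitlines():
        name, *vals = line.split()
        if vals and all(v.isdigit() for v in vals):
            stats[name] = [int(v) for v in vals]
    return stats

def diff_stats(before, after):
    """Per-counter deltas over a benchmark run."""
    return {name: [a - b for a, b in zip(after[name], before[name])]
            for name in after if name in before}
```

Profiling (oprofile, blktrace) could be wrapped around each run the same way, so the data is already on disk when a regression shows up.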
> Comments, benchmark suggestions?
The usual set - bonnie++, postmark, ffsb, fio, sio, etc.
Then some artificial tests that stress scalability like speed of
creating 1m small files with long names in a directory, the speed of
a cold cache read of the directory, the speed of a hot-cache read of
the directory, time to stat all the files (cold and hot cache),
time to remove all the files, etc. And then how well it scales
as you do this with more threads and directories in parallel...
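Those artificial tests are simple enough to script directly. A sketch of the create/stat/remove phases (cold-cache variants would additionally need a umount/mount or drop_caches between phases, which this sketch omits; name length and file count are just knobs):

```python
import os
import time

def timed(fn, *args):
    """Walltime for one phase of the metadata test."""
    t0 = time.time()
    fn(*args)
    return time.time() - t0

def create_files(dirpath, nfiles, namelen=64):
    """Create nfiles empty files with long (zero-padded) names
    in a single directory."""
    for i in range(nfiles):
        name = f"{i:0{namelen}d}"
        open(os.path.join(dirpath, name), "w").close()

def stat_files(dirpath):
    """Stat every file in the directory (hot or cold cache,
    depending on what ran before)."""
    for name in os.listdir(dirpath):
        os.stat(os.path.join(dirpath, name))

def remove_files(dirpath):
    """Unlink every file in the directory."""
    for name in os.listdir(dirpath):
        os.unlink(os.path.join(dirpath, name))
```

Scaling it up is then a matter of running several of these in parallel directories and watching how the per-phase times degrade.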
> ANyone already running this?
> Know of a test harness and/or report generator?
Perhaps you might want to look more closely at FFSB - it has a
fairly interesting automated test harness. e.g. it was used to
produce these:
http://btrfs.boxacle.net/
And you can probably set up custom workloads to cover all the things
that the standard benchmarks do.....
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: XFS performance tracking and regression monitoring
2008-10-24 3:54 ` Dave Chinner
@ 2008-10-24 7:12 ` Mark Goodwin
2008-10-24 15:37 ` Eric Sandeen
1 sibling, 0 replies; 4+ messages in thread
From: Mark Goodwin @ 2008-10-24 7:12 UTC (permalink / raw)
To: Mark Goodwin, xfs-oss
Dave Chinner wrote:
> On Fri, Oct 24, 2008 at 09:29:42AM +1000, Mark Goodwin wrote:
>> We're about to deploy a system+jbod dedicated for performance
>> regression tracking. The idea is to build the XFS dev branch
>> nightly, run a bunch of self contained benchmarks, and generate
>> a progressive daily report - date on the X-axis, with (perhaps)
>> wallclock runtime on the y-axis.
>
> wallclock runtime is not indicative of relative performance
> for many benchmarks. e.g. dbench runs for a fixed time and
> then gives a throughput number as its output. It's the throughput
> you want to compare.....
Either, or - both are differential measures. I want to keep this
really simple: just high-level tracking of *when* a performance
regression may have been introduced, with only broad indicators.
I don't think anyone is regularly tracking this for XFS, and we
should be.
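For that broad heads-up, something as crude as flagging any day that deviates from a trailing average would do. A sketch (the window size and 10% threshold are arbitrary knobs, not tuned values):

```python
def flag_regressions(runtimes, window=7, threshold=0.10):
    """Given a date-ordered list of (date, runtime) pairs, flag
    dates whose runtime exceeds the trailing-window mean by more
    than `threshold`.  Flagged dates can then be correlated by
    hand with that day's commits."""
    flagged = []
    for i in range(window, len(runtimes)):
        mean = sum(r for _, r in runtimes[i - window:i]) / window
        date, runtime = runtimes[i]
        if runtime > mean * (1 + threshold):
            flagged.append(date)
    return flagged
```

For throughput-style benchmarks the comparison just flips sign (flag drops below the trailing mean instead of rises above it).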
>> The aim is to track relative XFS performance on a daily basis
>> for various workloads on identical h/w. If each workload runs for
>> approx the same duration, the reports can all share the same
>> generic y-axis. The long term trend should have a positive
>> gradient.
>
> If you are measuring walltime, then you should see a negative
> gradient as an indication of improvement....
yes :) that's what I meant, but I was thinking "positively"
>> Regressions can be date correlated with commits.
>
> For the benchmarks to be useful as regression tests, the
> harness really needs to be profiling and gathering statistics at the
> same time so that we might be able to determine what caused the
> regression...
I would regard that as follow-up once an issue has been identified.
My proposal is too simple to be useful for diagnosis, but it should
be enough to provide heads-up. That's the aim to start with. The same
h/w can also be set up for more sophisticated measurements in the
longer term.
>> Comments, benchmark suggestions?
>
> The usual set - bonnie++, postmark, ffsb, fio, sio, etc.
>
> Then some artificial tests that stress scalability like speed of
> creating 1m small files with long names in a directory, the speed of
> a cold cache read of the directory, the speed of a hot-cache read of
> the directory, time to stat all the files (cold and hot cache),
> time to remove all the files, etc. And then how well it scales
> as you do this with more threads and directories in parallel...
Yeah OK, bits and pieces of the above - enough to provide a broad
heads-up.
>> Anyone already running this?
>> Know of a test harness and/or report generator?
>
> Perhaps you might want to look more closely at FFSB - it has a
> fairly interesting automated test harness. e.g. it was used to
> produce these:
>
> http://btrfs.boxacle.net/
>
> And you can probably set up custom workloads to cover all the things
> that the standard benchmarks do.....
I'll poke around on those pages for some ideas.
Thanks for the reply.
* Re: XFS performance tracking and regression monitoring
2008-10-24 3:54 ` Dave Chinner
2008-10-24 7:12 ` Mark Goodwin
@ 2008-10-24 15:37 ` Eric Sandeen
1 sibling, 0 replies; 4+ messages in thread
From: Eric Sandeen @ 2008-10-24 15:37 UTC (permalink / raw)
To: Mark Goodwin, xfs-oss
Dave Chinner wrote:
> Perhaps you might want to look more closely at FFSB - it has a
> fairly interesting automated test harness. e.g. it was used to
> produce these:
>
> http://btrfs.boxacle.net/
>
> And you can probably set up custom workloads to cover all the things
> that the standard benchmarks do.....
I was going to suggest that too, those are some nifty charts. :)
ffsb takes workload recipes so you can make it do a large variety of
things...
-Eric