From: "Theodore Tso" <tytso@mit.edu>
To: Hans Holmberg <Hans.Holmberg@wdc.com>
Cc: "lsf-pc@lists.linux-foundation.org"
<lsf-pc@lists.linux-foundation.org>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
Damien Le Moal <Damien.LeMoal@wdc.com>, hch <hch@lst.de>,
Johannes Thumshirn <Johannes.Thumshirn@wdc.com>,
Naohiro Aota <Naohiro.Aota@wdc.com>,
"josef@toxicpanda.com" <josef@toxicpanda.com>,
"jack@suse.com" <jack@suse.com>,
Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Subject: Re: [LSF/MM/BPF TOPIC] A common project for file system performance testing
Date: Wed, 18 Feb 2026 10:31:08 -0500
Message-ID: <20260218153108.GE45984@macsyma-wired.lan>
In-Reply-To: <b9f6cd20-8f0f-48d6-9819-e0c915206a3f@wdc.com>
I think this is definitely an interesting topic. One thing we should
consider is a set of requirements (or feature requests, if we're
talking about an existing code base) that would make it easier to run
performance testing.
A) Separate out the building of the benchmarks from the running of
   said benchmarks. My pattern is to build a test appliance, with all
   of the necessary dependencies (e.g., fio, dbench, etc.)
   precompiled, which can be uploaded to the system under test (SUT).
   The SUT might be a VM, a device where running a compiler is
   prohibited by security policy (e.g., a machine in a data center),
   a device which doesn't have a compiler installed, or one where
   running the compiler would be slow and painful (e.g., an Android
   device).
B) Separate out fetching the benchmark components from building them.
   An enterprise might have local changes, and so want to use a
   version of these tools from a local repo. It also could be that
   security policy prohibits downloading software from the network in
   an automated process, and requires that any software to be built
   in the build environment be reviewed by one or more human beings.
C) A modular way of storing the results. I like to run my file system
   tests in a VM, which is deleted as soon as the test run is
   completed. This significantly reduces the cost, since the cost of
   the VM is only paid while a test is active. But that means the
   performance results should not be assumed to be stored on the
   local file system where the benchmarks are run; instead, the
   results should ideally be stored in some kind of flat file (a la
   JUnit and KUnit result files) which can then be collated in some
   kind of centralized store.
D) A standardized way of specifying the hardware configuration of the
   SUT. This might include VMs hosted at a hyperscaler, because of
   the cost advantage, and because very often the software-defined
   storage in cloud VMs doesn't necessarily act like traditional HDDs
   or flash devices.
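As a concrete sketch of (C) — with the metric names and values made
up purely for illustration — the benchmark wrapper on the SUT could
emit one parseable line per result into a single flat file, which the
VM pushes to a central store before it is destroyed:

```shell
#!/bin/sh
# Hypothetical sketch: write benchmark results as one flat,
# line-oriented record per metric, instead of assuming the results
# live on the SUT's own (soon-to-be-deleted) filesystem.
set -eu

RESULTS="${RESULTS:-results.flat}"
: > "$RESULTS"   # start a fresh result file for this run

# record <benchmark> <metric> <value> -- appends one parseable line
record() {
    printf '%s %s %s\n' "$1" "$2" "$3" >> "$RESULTS"
}

# Stand-in numbers; a real wrapper would parse them out of the
# benchmark's own output.
record fio randwrite_iops 91234
record fio randread_iops 120881
record dbench throughput_mbps 412.7

# Collation on the centralized side is then trivial, e.g. pull out
# every fio metric:
grep '^fio ' "$RESULTS"
```

The point of the flat format is that collation needs nothing fancier
than grep/awk on the central side, and the file survives the VM.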
I'll note that one of the concerns with running performance tests in
a VM is the noisy neighbor problem. That is, what if the behavior of
other VMs on the host affects the performance of the test VM? How bad
this is may depend on whether CPU or memory is subject to
overprovisioning (which varies with the VM type). There are also VM
types where all of the resources are dedicated to a single VM.
One thing that would be useful would be for people running benchmarks
to run the exact same configuration (kernel version, benchmark
software versions, etc.) multiple times, at different times, on the
same VM type, so the variability of the benchmark results can be
measured.
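That variability measurement can be a one-liner once the repeated
runs are reduced to one number each. A sketch (the run values here
are invented stand-ins): compute the mean and the coefficient of
variation across the runs with awk, so different VM types can be
compared for noisy-neighbor jitter.

```shell
#!/bin/sh
# Sketch: given one throughput number per repeated run of an
# identical configuration, report mean, standard deviation, and
# coefficient of variation.  The five IOPS values are made up.
set -eu

printf '%s\n' 91210 90874 92011 88502 91640 > runs.txt

awk '
    { sum += $1; sumsq += $1 * $1; n++ }
    END {
        mean = sum / n
        sd = sqrt(sumsq / n - mean * mean)   # population std dev
        printf "mean=%.1f sd=%.1f cv=%.2f%%\n", mean, sd, 100 * sd / mean
    }
' runs.txt
```

A low coefficient of variation across runs at different times of day
would suggest the VM type is quiet enough for benchmarking; a high
one means more repetitions (or a dedicated-resource VM type) are
needed.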
Yes, this is a bit more work, but the benefits of using VMs, where
you don't have to maintain hardware, deal with hard drive failures,
etc., mean that some people might find the cost/benefit tradeoff
appealing.
Cheers,
- Ted
Thread overview: 14+ messages
2026-02-12 13:42 [LSF/MM/BPF TOPIC] A common project for file system performance testing Hans Holmberg
2026-02-12 14:31 ` Daniel Wagner
2026-02-13 11:50 ` Shinichiro Kawasaki
2026-02-12 16:42 ` Johannes Thumshirn
2026-02-12 17:32 ` Josef Bacik
2026-02-12 17:37 ` [Lsf-pc] " Amir Goldstein
2026-02-12 19:03 ` Josef Bacik
2026-02-13 9:13 ` Hans Holmberg
2026-02-13 6:59 ` Johannes Thumshirn
2026-02-16 10:10 ` Jan Kara
2026-02-17 8:13 ` Hans Holmberg
2026-02-18 15:31 ` Theodore Tso [this message]
2026-02-20 8:59 ` Hans Holmberg
2026-02-23 13:26 ` Johannes Thumshirn