From: Thomas Glanzmann <thomas@glanzmann.de>
To: Sitsofe Wheeler <sitsofe@gmail.com>
Cc: fio@vger.kernel.org
Subject: Re: Evenly distribute jobs and iodepth over a 1 TiB device so that every byte is written to in parallel
Date: Thu, 17 Jul 2025 09:52:04 +0200 [thread overview]
Message-ID: <aHirpCGIIZlhlVfs@glanzmann.de> (raw)
In-Reply-To: <CALjAwxiPOrgYLj2oyOeupgtk2mr7L3xMVdZYLjyRavsPqj==Qg@mail.gmail.com>
Hello Sitsofe,
* Sitsofe Wheeler <sitsofe@gmail.com> [2025-07-15 22:44]:
> (https://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-scramble_buffers)
> which may be enough but you would have to check. The trick would be to
> see what happens with a single stream as that's easier to reason
> about.
I see; I stuck with refill_buffers because it was good enough.
> https://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-offset_increment
> and size https://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-size
Thank you, that resolved my problem. I used the following command line:
fio --ioengine=libaio --refill_buffers --offset=0 --offset_increment=26G \
--size=25G --ramp_time=2s --numjobs=80 --direct=1 --verify=0 \
--randrepeat=0 --group_reporting --filename=/dev/nvme0n1 --name=1mhqd \
--blocksize=1m --iodepth=3 --readwrite=write
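With offset_increment=26G and size=25G, each of the 80 jobs writes its own
25 GiB region at a 26 GiB stride, so a 1 GiB gap separates neighbouring
regions. A quick sanity check (my own sketch, not part of the command above)
that the per-job regions never overlap:

```python
GiB = 1024 ** 3
numjobs = 80
offset_increment = 26 * GiB   # --offset_increment=26G
size = 25 * GiB               # --size=25G

# (start, end) byte range written by each job.
regions = [(i * offset_increment, i * offset_increment + size)
           for i in range(numjobs)]

# Each job's end must stay at or below the next job's start
# (25 GiB written inside a 26 GiB stride).
for (s1, e1), (s2, e2) in zip(regions, regions[1:]):
    assert e1 <= s2

print(regions[-1][1] // GiB)  # highest GiB offset touched → 2079
```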
> Do you get similar results in terms of space used with a single fio
> stream? Start small and then work your way up!
Yes, that works, but I also wanted to benchmark parallel performance.
Besides, the single fio stream takes an hour, while the parallel one takes
only 8.5 minutes.
> But perhaps you're thinking of device queue depths?
Yes, that is what I was looking for; Keith Busch answered me. The commands
I was looking for are:
# How many IO queues are there:
ls -1 /sys/block/nvme0n1/mq/ | wc -l
# How large is each IO queue:
cat /sys/block/nvme0n1/queue/nr_requests
> Given you're running 40 jobs I'd be surprised if you can hit a depth
> of over 1000 per job (that would be over 65000 I/Os in total) without
> some serious tuning. You may want to look at
> /sys/block/[disk]/queue/nr_requests (see
> https://www.kernel.org/doc/Documentation/block/queue-sysfs.rst ) and
> /sys/block/[disk]/device/queue_depth but you may also find you run
> into libaio limits...
I could not. The NetApp only has 8 * 128 queue depth, so now that I knew
the exact values, I adopted the same.
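For illustration (my own arithmetic, not from the thread): with 8 I/O queues
of 128 entries each, the device can hold at most 1024 requests in flight,
while the fio command above issues at most numjobs * iodepth outstanding
I/Os, so it stays well within that limit:

```python
# Device-side capacity, from the sysfs values observed above.
nr_queues = 8        # ls -1 /sys/block/nvme0n1/mq/ | wc -l
nr_requests = 128    # cat /sys/block/nvme0n1/queue/nr_requests
device_capacity = nr_queues * nr_requests

# fio-side maximum outstanding I/Os, from the command line above.
numjobs = 80
iodepth = 3
fio_outstanding = numjobs * iodepth

print(device_capacity, fio_outstanding)  # → 1024 240
assert fio_outstanding <= device_capacity
```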
Cheers,
Thomas
Thread overview: 3+ messages
2025-07-15 5:17 Evenly distribute jobs and iodepth over a 1 TiB device so that every byte is written to in parallel Thomas Glanzmann
2025-07-15 20:44 ` Sitsofe Wheeler
2025-07-17 7:52 ` Thomas Glanzmann [this message]