From: Ankit Jain <jankit@suse.de>
To: Dave Chinner <david@fromorbit.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>,
bcrl@kvack.org, linux-fsdevel@vger.kernel.org,
linux-aio@kvack.org, linux-kernel@vger.kernel.org,
Jan Kara <jack@suse.cz>
Subject: Re: [RFC][PATCH] Make io_submit non-blocking
Date: Thu, 26 Jul 2012 01:42:55 +0530 [thread overview]
Message-ID: <50105347.3050308@suse.de> (raw)
In-Reply-To: <20120724223110.GQ23387@dastard>
On 07/25/2012 04:01 AM, Dave Chinner wrote:
> On Tue, Jul 24, 2012 at 05:11:05PM +0530, Ankit Jain wrote:
[snip]
>> **Unpatched**
>> read : io=102120KB, bw=618740 B/s, iops=151 , runt=169006msec
>> slat (usec): min=275 , max=87560 , avg=6571.88, stdev=2799.57
>
> Hmmm, I had to check the numbers twice - that's only 600KB/s.
>
> Perhaps you need to test on something more than a single piece of
> spinning rust. Optimising AIO for SSD rates (say 100k 4k write IOPS)
> is probably more relevant to the majority of AIO users....
I tested with a ramdisk to "simulate" a fast disk and had attached the
results. I'll try to get hold of an SSD and then test with that as well.
Meanwhile, I ran the tests again with ext3/ext4/xfs/btrfs. I'm not sure
what I screwed up in that previous test, but the numbers now look proper
(in line with what I was getting in my earlier testing):
For the disk runs, I tested on a separate partition formatted with each
filesystem and then ran fio on it with a single job. Here "Old" is
3.5-rc7 (918227b).
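The job file used was roughly of the following shape. This is a hedged
reconstruction, not the exact job file (the engine, block size, mix and
paths are assumptions; the actual job files are in the attached
fio-logs.tgz):

```ini
; hypothetical fio job approximating the test described above
[aio-test]
ioengine=libaio      ; exercises io_submit(2), where the latency is measured
direct=1             ; bypass the page cache
rw=randrw            ; both read and write rows appear in the tables below
bs=4k
size=1700m           ; file size mentioned for the ramdisk run
iodepth=32
numjobs=1            ; "with 1 job"
directory=/mnt/test  ; the separate test partition (illustrative path)
```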
------ disk -------
====== ext3 ======
submit latencies (usec)
B/w iops runtime min max avg std dev
ext3-read :
Old: 453352 B/s 110 231050msec 3 283048 170.28 5183.28
New: 451298 B/s 110 232555msec 0 444 8.18 7.95
ext3-write:
Old: 454309 B/s 110 231050msec 2 304614 232.72 6549.82
New: 450488 B/s 109 232555msec 0 233 7.94 7.23
====== ext4 ======
ext4-read :
Old: 459824 B/s 112 228635msec 2 260051 121.40 3569.78
New: 422700 B/s 103 247097msec 0 165 8.18 7.87
ext4-write:
Old: 457424 B/s 111 228635msec 3 312958 166.75 4616.58
New: 426015 B/s 104 247097msec 0 169 8.00 8.08
====== xfs ======
xfs-read :
Old: 467330 B/s 114 224516msec 3 272 46.45 25.35
New: 417049 B/s 101 252262msec 0 165 7.84 7.87
xfs-write:
Old: 466746 B/s 113 224516msec 3 265 52.52 28.13
New: 414289 B/s 101 252262msec 0 143 7.58 7.66
====== btrfs ======
btrfs-read :
Old: 1027.1KB/s 256 99918msec 5 84457 62.15 527.24
New: 1054.5KB/s 263 97542msec 0 121 9.72 7.05
btrfs-write:
Old: 1021.8KB/s 255 99918msec 10 139473 84.96 899.99
New: 1045.2KB/s 261 97542msec 0 248 9.55 7.02
These are the figures with a ramdisk:
------ ramdisk -------
====== ext3 ======
submit latencies (usec)
B/w iops runtime min max avg std dev
ext3-read :
Old: 430312KB/s 107577 2026msec 1 7072 3.85 15.17
New: 491251KB/s 122812 1772msec 0 22 0.39 0.52
ext3-write:
Old: 428918KB/s 107229 2026msec 2 61 3.46 0.85
New: 491142KB/s 122785 1772msec 0 62 0.43 0.55
====== ext4 ======
ext4-read :
Old: 466132KB/s 116532 1869msec 2 133 3.66 1.04
New: 542337KB/s 135584 1607msec 0 67 0.40 0.54
ext4-write:
Old: 465276KB/s 116318 1869msec 2 127 2.96 0.94
New: 540923KB/s 135230 1607msec 0 73 0.43 0.55
====== xfs ======
xfs-read :
Old: 485556KB/s 121389 1794msec 2 160 3.58 1.22
New: 581477KB/s 145369 1495msec 0 19 0.39 0.51
xfs-write:
Old: 484789KB/s 121197 1794msec 1 87 2.68 0.99
New: 582938KB/s 145734 1495msec 0 56 0.43 0.55
====== btrfs ======
I had trouble with btrfs on a ramdisk, though: it complained about
running out of space during preallocation. This was with a 4 GiB ramdisk
and fio set to write a 1700 MB file, so these numbers are from that
partial run. Btrfs ran fine on a regular disk.
btrfs-read :
Old: 107519KB/s 26882 2579msec 13 1492 17.03 9.23
New: 109878KB/s 27469 4665msec 0 29 0.45 0.55
btrfs-write:
Old: 108047KB/s 27020 2579msec 1 64963 17.21 823.88
New: 109413KB/s 27357 4665msec 0 32 0.48 0.56
Also, I dropped caches ("echo 3 > /proc/sys/vm/drop_caches") and synced
before running each test. All the fio log files are attached.
Any suggestions on how I might test this better, other than the SSD
suggestion, of course?
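For anyone reproducing these numbers: the "submit latencies" reported
above are the time spent inside the io_submit(2) call itself, which is
exactly what this patch tries to bound. A minimal sketch of measuring
that with the raw AIO syscalls (the temp-file path, buffer size and
buffered-write setup are illustrative, not what fio does internally):

```c
#define _GNU_SOURCE
#include <linux/aio_abi.h>   /* aio_context_t, struct iocb, IOCB_CMD_PWRITE */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

/* Returns the wall-clock time spent inside one io_submit() call,
 * in nanoseconds, or -1 on any error. */
static long measure_submit_latency_ns(void)
{
    aio_context_t ctx = 0;
    if (syscall(SYS_io_setup, 1, &ctx) < 0)
        return -1;

    char path[] = "/tmp/aio-demo-XXXXXX";  /* illustrative location */
    int fd = mkstemp(path);
    if (fd < 0) { syscall(SYS_io_destroy, ctx); return -1; }
    unlink(path);                          /* file vanishes when fd closes */

    void *buf = NULL;
    if (posix_memalign(&buf, 4096, 4096)) {
        close(fd); syscall(SYS_io_destroy, ctx); return -1;
    }
    memset(buf, 0xab, 4096);

    struct iocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_lio_opcode = IOCB_CMD_PWRITE;   /* one async 4k write */
    cb.aio_buf = (uint64_t)(uintptr_t)buf;
    cb.aio_nbytes = 4096;
    struct iocb *cbs[1] = { &cb };

    /* Time only the submit call, not the I/O completion. */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    long submitted = syscall(SYS_io_submit, ctx, 1, cbs);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    long ns = -1;
    if (submitted == 1) {
        struct io_event ev;
        if (syscall(SYS_io_getevents, ctx, 1, 1, &ev, NULL) == 1)
            ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
               + (t1.tv_nsec - t0.tv_nsec);
    }
    free(buf);
    close(fd);
    syscall(SYS_io_destroy, ctx);
    return ns;
}
```

With an unmodified kernel this measures whatever the filesystem does
synchronously under io_submit(); with the patch it should stay bounded.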
[snip]
> Also, you added a memory allocation in the io submit code. Worse
> case latency will still be effectively undefined - what happens to
> latencies if you generate memory pressure while the test is running?
I'll try to fix this.
--
Ankit Jain
SUSE Labs
[-- Attachment #2: fio-logs.tgz --]
[-- Type: application/x-compressed-tar, Size: 11198 bytes --]