From: Ming Lei <ming.lei@redhat.com>
To: Ming Lin <minggr@gmail.com>
Cc: linux-block@vger.kernel.org,
Linux FS Devel <linux-fsdevel@vger.kernel.org>,
ming.lei@redhat.com
Subject: Re: Performance Difference between ext4 and Raw Block Device Access with buffer_io
Date: Wed, 15 Nov 2023 17:19:28 +0800
Message-ID: <ZVSNIClnCnmay8e6@fedora>
In-Reply-To: <CAF1ivSY-V+afUxfH7SDyM9vG991u7EoDCteL1y5jurnKSzQ3YA@mail.gmail.com>
On Mon, Nov 13, 2023 at 05:57:52PM -0800, Ming Lin wrote:
> Hi,
>
> We are currently conducting performance tests on an application that
> involves writing/reading data to/from ext4 or a raw block device.
> Specifically, for raw block device access, we have implemented a
> simple "userspace filesystem" directly on top of it.
>
> All write/read operations are being tested using buffer_io. However,
> we have observed that the ext4+buffer_io performance significantly
> outperforms raw_block_device+buffer_io:
>
> ext4: write 18G/s, read 40G/s
> raw block device: write 18G/s, read 21G/s
Can you share your exact test case?
I tried the following fio tests on both ext4 over NVMe and the raw NVMe
device, and I see the opposite result: raw block device throughput is
about 2X that of ext4, on both a VM and real hardware.
1) raw NVMe
fio --direct=0 --size=128G --bs=64k --runtime=20 --numjobs=8 --ioengine=psync \
--group_reporting=1 --filename=/dev/nvme0n1 --name=test-read --rw=read
2) ext4
fio --size=1G --time_based --bs=4k --runtime=20 --numjobs=8 \
--ioengine=psync --directory=$DIR --group_reporting=1 \
--unlink=0 --direct=0 --fsync=0 --name=f1 --stonewall --rw=read
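For buffered reads it may also be worth starting each run from a cold page
cache, so the comparison measures the same I/O path rather than cache hits.
A minimal sketch (assumes root and that nothing else on the box relies on
the cached data):

sync
echo 3 > /proc/sys/vm/drop_caches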
>
> We are exploring potential reasons for this difference. One hypothesis
> is related to the page cache radix tree being per inode. Could it be
> that, for the raw_block_device, there is only one radix tree, leading
> to increased lock contention during write/read buffer_io operations?
'perf record/report' should show the hot spot if lock contention is the
reason.
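As a rough sketch (run system-wide while fio is in progress; the 20s
duration is arbitrary):

perf record -a -g -- sleep 20
perf report --no-children

If contention on the single bdev inode's page cache is really the cause,
spinlock symbols (e.g. _raw_spin_lock from the i_pages/xarray path) should
show up near the top of the profile for the raw block device case.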
Thanks,
Ming