From: Niklas Cassel <Niklas.Cassel@wdc.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Ming Lin <minggr@gmail.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	Linux FS Devel <linux-fsdevel@vger.kernel.org>
Subject: Re: Performance Difference between ext4 and Raw Block Device Access with buffer_io
Date: Wed, 15 Nov 2023 14:20:02 +0000	[thread overview]
Message-ID: <ZVTTh/LdexBD7BdE@x1-carbon> (raw)
In-Reply-To: <ZVSNIClnCnmay8e6@fedora>

On Wed, Nov 15, 2023 at 05:19:28PM +0800, Ming Lei wrote:
> On Mon, Nov 13, 2023 at 05:57:52PM -0800, Ming Lin wrote:
> > Hi,
> > 
> > We are currently conducting performance tests on an application that
> > involves writing/reading data to/from ext4 or a raw block device.
> > Specifically, for raw block device access, we have implemented a
> > simple "userspace filesystem" directly on top of it.
> > 
> > All write/read operations are being tested using buffer_io. However,
> > we have observed that the ext4+buffer_io performance significantly
> > outperforms raw_block_device+buffer_io:
> > 
> > ext4: write 18G/s, read 40G/s
> > raw block device: write 18G/s, read 21G/s
> 
> Can you share your exact test case?
> 
> I tried the following fio test on both ext4 over nvme and raw nvme, and the
> result is the opposite: raw block device throughput is 2X ext4, and it
> can be observed in both a VM and on real hardware.
> 
> 1) raw NVMe
> 
> fio --direct=0 --size=128G --bs=64k --runtime=20 --numjobs=8 --ioengine=psync \
>     --group_reporting=1 --filename=/dev/nvme0n1 --name=test-read --rw=read
> 
> 2) ext4
> 
> fio --size=1G --time_based --bs=4k --runtime=20 --numjobs=8 \
> 	--ioengine=psync --directory=$DIR --group_reporting=1 \
> 	--unlink=0 --direct=0 --fsync=0 --name=f1 --stonewall --rw=read

Hello Ming,

1) uses bs=64k while 2) uses bs=4k; was this intentional?

2) uses --stonewall while 1) doesn't; was this intentional?

For a fair comparison, you might also want to use the same size in
both runs (currently 1G vs 128G).
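
For example, a matched pair might look like this (just a sketch,
reusing /dev/nvme0n1 and $DIR from your commands, and applying
--time_based and --stonewall to both runs):

1) raw NVMe

fio --direct=0 --size=1G --bs=64k --runtime=20 --time_based --numjobs=8 \
    --ioengine=psync --group_reporting=1 --filename=/dev/nvme0n1 \
    --name=test-read --rw=read --stonewall

2) ext4

fio --direct=0 --size=1G --bs=64k --runtime=20 --time_based --numjobs=8 \
    --ioengine=psync --group_reporting=1 --directory=$DIR \
    --name=test-read --rw=read --stonewall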

And perhaps clear the page cache before each fio invocation:
# echo 1 > /proc/sys/vm/drop_caches
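
For write tests, a sync beforehand lets dirty pages actually be
dropped, and echo 3 also drops dentries and inodes:
# sync
# echo 3 > /proc/sys/vm/drop_caches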


Kind regards,
Niklas

Thread overview: 4 messages
2023-11-14  1:57 Performance Difference between ext4 and Raw Block Device Access with buffer_io Ming Lin
2023-11-15  9:19 ` Ming Lei
2023-11-15 14:20   ` Niklas Cassel [this message]
2023-11-15 14:59     ` Ming Lei
