From: Ming Lei <ming.lei@redhat.com>
To: Niklas Cassel <Niklas.Cassel@wdc.com>
Cc: Ming Lin <minggr@gmail.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	Linux FS Devel <linux-fsdevel@vger.kernel.org>
Subject: Re: Performance Difference between ext4 and Raw Block Device Access with buffer_io
Date: Wed, 15 Nov 2023 22:59:20 +0800
Message-ID: <ZVTcyKbHTasef1Py@fedora>
In-Reply-To: <ZVTTh/LdexBD7BdE@x1-carbon>

On Wed, Nov 15, 2023 at 02:20:02PM +0000, Niklas Cassel wrote:
> On Wed, Nov 15, 2023 at 05:19:28PM +0800, Ming Lei wrote:
> > On Mon, Nov 13, 2023 at 05:57:52PM -0800, Ming Lin wrote:
> > > Hi,
> > > 
> > > We are currently conducting performance tests on an application that
> > > involves writing/reading data to/from ext4 or a raw block device.
> > > Specifically, for raw block device access, we have implemented a
> > > simple "userspace filesystem" directly on top of it.
> > > 
> > > All write/read operations are being tested using buffer_io. However,
> > > we have observed that the ext4+buffer_io performance significantly
> > > outperforms raw_block_device+buffer_io:
> > > 
> > > ext4: write 18G/s, read 40G/s
> > > raw block device: write 18G/s, read 21G/s
> > 
> > Can you share your exact test case?
> > 
> > I tried the following fio test on both ext4 over nvme and raw nvme, and the
> > result is the opposite: raw block device throughput is 2X that of ext4, and it
> > can be observed on both a VM and real hardware.
> > 
> > 1) raw NVMe
> > 
> > fio --direct=0 --size=128G --bs=64k --runtime=20 --numjobs=8 --ioengine=psync \
> >     --group_reporting=1 --filename=/dev/nvme0n1 --name=test-read --rw=read
> > 
> > 2) ext4
> > 
> > fio --size=1G --time_based --bs=4k --runtime=20 --numjobs=8 \
> > 	--ioengine=psync --directory=$DIR --group_reporting=1 \
> > 	--unlink=0 --direct=0 --fsync=0 --name=f1 --stonewall --rw=read
> 
> Hello Ming,
> 
> 1) uses bs=64k, 2) uses bs=4k, was this intentional?

It is a typo; both are actually using bs=64k.
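
For reference, the corrected ext4 invocation is just the one above with bs
bumped to 64k:

fio --size=1G --time_based --bs=64k --runtime=20 --numjobs=8 \
    --ioengine=psync --directory=$DIR --group_reporting=1 \
    --unlink=0 --direct=0 --fsync=0 --name=f1 --stonewall --rw=read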

> 
> 2) uses stonewall, but 1) doesn't, was this intentional?

To be honest, the two commands come from two different existing scripts.
I ran the raw block test again with --stonewall added, as below, and saw
no difference.
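
For example, the raw block command with --stonewall added is simply:

fio --direct=0 --size=128G --bs=64k --runtime=20 --numjobs=8 --ioengine=psync \
    --group_reporting=1 --filename=/dev/nvme0n1 --name=test-read --rw=read \
    --stonewall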

> 
> For fairness, you might want to use the same size (1G vs 128G).

For the fs test, each IO job creates its own file and runs IO against that
file, whereas in the raw block test there is only one 'file', and all 8 jobs
run IO on the same block device.

So I just started one quick randread test, and a similar gap to the read
test can be observed there too.
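
The randread variant is nothing special, just the same invocations with
--rw=randread, roughly:

fio --direct=0 --size=128G --bs=64k --runtime=20 --numjobs=8 --ioengine=psync \
    --group_reporting=1 --filename=/dev/nvme0n1 --name=test-randread \
    --rw=randread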

> 
> And perhaps clear the page cache before each fio invocation:
> # echo 1 > /proc/sys/vm/drop_caches

Yes, it is always done before running the two buffered IO tests.
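
That is roughly the usual sequence, i.e.:

# write back dirty pages first so the clean page cache can actually be dropped
sync
echo 1 > /proc/sys/vm/drop_caches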


thanks,
Ming


Thread overview: 4+ messages
2023-11-14  1:57 Performance Difference between ext4 and Raw Block Device Access with buffer_io Ming Lin
2023-11-15  9:19 ` Ming Lei
2023-11-15 14:20   ` Niklas Cassel
2023-11-15 14:59     ` Ming Lei [this message]
