* Performance Difference between ext4 and Raw Block Device Access with buffer_io
@ 2023-11-14 1:57 Ming Lin
2023-11-15 9:19 ` Ming Lei
0 siblings, 1 reply; 4+ messages in thread
From: Ming Lin @ 2023-11-14 1:57 UTC (permalink / raw)
To: linux-block, Linux FS Devel
Hi,
We are currently conducting performance tests on an application that
involves writing/reading data to/from ext4 or a raw block device.
Specifically, for raw block device access, we have implemented a
simple "userspace filesystem" directly on top of it.
All write/read operations are being tested using buffer_io. However,
we have observed that the ext4+buffer_io performance significantly
outperforms raw_block_device+buffer_io:
ext4: write 18G/s, read 40G/s
raw block device: write 18G/s, read 21G/s
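(For clarity, by buffer_io on the raw block device we mean plain
page-cache I/O, i.e. reads/writes on the opened device without O_DIRECT;
conceptually something like the following, where the device name and
sizes are only illustrative:)

# buffered (page-cache) read of the raw device; no iflag=direct
dd if=/dev/nvme0n1 of=/dev/null bs=64k count=100000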
We are exploring potential reasons for this difference. One hypothesis
is related to the page cache radix tree being per inode. Could it be
that, for the raw_block_device, there is only one radix tree, leading
to increased lock contention during write/read buffer_io operations?
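One experiment we may try to check this (just a sketch; the device name,
partition layout, and fio options below are illustrative, not our actual
test) is to give each reader its own block-device inode, e.g. one fio
job per partition, so every job gets a separate page-cache tree:

# Hypothetical: nvme0n1 split into 8 partitions, one fio job per
# partition, so each job reads through its own block-device inode.
for i in $(seq 1 8); do
    fio --direct=0 --bs=64k --runtime=20 --numjobs=1 --ioengine=psync \
        --filename=/dev/nvme0n1p$i --name=job$i --rw=read &
done
wait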
Your insights on this matter would be greatly appreciated.
Thanks,
Ming
* Re: Performance Difference between ext4 and Raw Block Device Access with buffer_io
2023-11-14 1:57 Performance Difference between ext4 and Raw Block Device Access with buffer_io Ming Lin
@ 2023-11-15 9:19 ` Ming Lei
2023-11-15 14:20 ` Niklas Cassel
0 siblings, 1 reply; 4+ messages in thread
From: Ming Lei @ 2023-11-15 9:19 UTC (permalink / raw)
To: Ming Lin; +Cc: linux-block, Linux FS Devel, ming.lei
On Mon, Nov 13, 2023 at 05:57:52PM -0800, Ming Lin wrote:
> Hi,
>
> We are currently conducting performance tests on an application that
> involves writing/reading data to/from ext4 or a raw block device.
> Specifically, for raw block device access, we have implemented a
> simple "userspace filesystem" directly on top of it.
>
> All write/read operations are being tested using buffer_io. However,
> we have observed that the ext4+buffer_io performance significantly
> outperforms raw_block_device+buffer_io:
>
> ext4: write 18G/s, read 40G/s
> raw block device: write 18G/s, read 21G/s
Can you share your exact test case?
I tried the following fio test on both ext4 over nvme and raw nvme, and the
result is the opposite: raw block device throughput is 2X ext4, and it
can be observed in both VM and real hardware.
1) raw NVMe
fio --direct=0 --size=128G --bs=64k --runtime=20 --numjobs=8 --ioengine=psync \
--group_reporting=1 --filename=/dev/nvme0n1 --name=test-read --rw=read
2) ext4
fio --size=1G --time_based --bs=4k --runtime=20 --numjobs=8 \
--ioengine=psync --directory=$DIR --group_reporting=1 \
--unlink=0 --direct=0 --fsync=0 --name=f1 --stonewall --rw=read
>
> We are exploring potential reasons for this difference. One hypothesis
> is related to the page cache radix tree being per inode. Could it be
> that, for the raw_block_device, there is only one radix tree, leading
> to increased lock contention during write/read buffer_io operations?
'perf record/report' should show the hot spot if lock contention is the
reason.
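For example, something like the following while the buffered read test
is running (the exact options are just a sketch):

perf record -g -a -- sleep 10
perf report --no-children --sort symbol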
Thanks,
Ming
* Re: Performance Difference between ext4 and Raw Block Device Access with buffer_io
2023-11-15 9:19 ` Ming Lei
@ 2023-11-15 14:20 ` Niklas Cassel
2023-11-15 14:59 ` Ming Lei
0 siblings, 1 reply; 4+ messages in thread
From: Niklas Cassel @ 2023-11-15 14:20 UTC (permalink / raw)
To: Ming Lei; +Cc: Ming Lin, linux-block@vger.kernel.org, Linux FS Devel
On Wed, Nov 15, 2023 at 05:19:28PM +0800, Ming Lei wrote:
> On Mon, Nov 13, 2023 at 05:57:52PM -0800, Ming Lin wrote:
> > Hi,
> >
> > We are currently conducting performance tests on an application that
> > involves writing/reading data to/from ext4 or a raw block device.
> > Specifically, for raw block device access, we have implemented a
> > simple "userspace filesystem" directly on top of it.
> >
> > All write/read operations are being tested using buffer_io. However,
> > we have observed that the ext4+buffer_io performance significantly
> > outperforms raw_block_device+buffer_io:
> >
> > ext4: write 18G/s, read 40G/s
> > raw block device: write 18G/s, read 21G/s
>
> Can you share your exact test case?
>
> I tried the following fio test on both ext4 over nvme and raw nvme, and the
> result is the opposite: raw block device throughput is 2X ext4, and it
> can be observed in both VM and real hardware.
>
> 1) raw NVMe
>
> fio --direct=0 --size=128G --bs=64k --runtime=20 --numjobs=8 --ioengine=psync \
> --group_reporting=1 --filename=/dev/nvme0n1 --name=test-read --rw=read
>
> 2) ext4
>
> fio --size=1G --time_based --bs=4k --runtime=20 --numjobs=8 \
> --ioengine=psync --directory=$DIR --group_reporting=1 \
> --unlink=0 --direct=0 --fsync=0 --name=f1 --stonewall --rw=read
Hello Ming,
1) uses bs=64k, 2) uses bs=4k, was this intentional?
2) uses stonewall, but 1) doesn't, was this intentional?
For fairness, you might want to use the same size (1G vs 128G).
And perhaps clear the page cache before each fio invocation:
# echo 1 > /proc/sys/vm/drop_caches
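Also, since drop_caches only drops clean caches, you may want to sync
first; echo 3 drops dentries and inodes as well:

# sync
# echo 3 > /proc/sys/vm/drop_caches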
Kind regards,
Niklas
* Re: Performance Difference between ext4 and Raw Block Device Access with buffer_io
2023-11-15 14:20 ` Niklas Cassel
@ 2023-11-15 14:59 ` Ming Lei
0 siblings, 0 replies; 4+ messages in thread
From: Ming Lei @ 2023-11-15 14:59 UTC (permalink / raw)
To: Niklas Cassel; +Cc: Ming Lin, linux-block@vger.kernel.org, Linux FS Devel
On Wed, Nov 15, 2023 at 02:20:02PM +0000, Niklas Cassel wrote:
> On Wed, Nov 15, 2023 at 05:19:28PM +0800, Ming Lei wrote:
> > On Mon, Nov 13, 2023 at 05:57:52PM -0800, Ming Lin wrote:
> > > Hi,
> > >
> > > We are currently conducting performance tests on an application that
> > > involves writing/reading data to/from ext4 or a raw block device.
> > > Specifically, for raw block device access, we have implemented a
> > > simple "userspace filesystem" directly on top of it.
> > >
> > > All write/read operations are being tested using buffer_io. However,
> > > we have observed that the ext4+buffer_io performance significantly
> > > outperforms raw_block_device+buffer_io:
> > >
> > > ext4: write 18G/s, read 40G/s
> > > raw block device: write 18G/s, read 21G/s
> >
> > Can you share your exact test case?
> >
> > I tried the following fio test on both ext4 over nvme and raw nvme, and the
> > result is the opposite: raw block device throughput is 2X ext4, and it
> > can be observed in both VM and real hardware.
> >
> > 1) raw NVMe
> >
> > fio --direct=0 --size=128G --bs=64k --runtime=20 --numjobs=8 --ioengine=psync \
> > --group_reporting=1 --filename=/dev/nvme0n1 --name=test-read --rw=read
> >
> > 2) ext4
> >
> > fio --size=1G --time_based --bs=4k --runtime=20 --numjobs=8 \
> > --ioengine=psync --directory=$DIR --group_reporting=1 \
> > --unlink=0 --direct=0 --fsync=0 --name=f1 --stonewall --rw=read
>
> Hello Ming,
>
> 1) uses bs=64k, 2) uses bs=4k, was this intentional?
It is a typo; actually both use bs=64k.
>
> 2) uses stonewall, but 1) doesn't, was this intentional?
To be honest, the two tests come from two different pre-existing
scripts. I re-ran the raw block test with --stonewall added and did not
see any difference.
>
> For fairness, you might want to use the same size (1G vs 128G).
For the fs test, each IO job creates its own file and runs IO against
that file, while in the raw block test there is only one 'file': all 8
jobs run IO on the same block device.
I also started a quick randread test, and a similar gap can be observed
there compared with the sequential read test.
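(That is, roughly the raw-device command above with --rw=randread; the
exact form here is only a sketch:)

fio --direct=0 --size=128G --bs=64k --runtime=20 --numjobs=8 --ioengine=psync \
    --group_reporting=1 --filename=/dev/nvme0n1 --name=test-randread --rw=randread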
>
> And perhaps clear the page cache before each fio invocation:
> # echo 1 > /proc/sys/vm/drop_caches
Yes, it is always done before running the two buffered IO tests.
thanks,
Ming
end of thread, other threads:[~2023-11-15 14:59 UTC | newest]
Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-11-14 1:57 Performance Difference between ext4 and Raw Block Device Access with buffer_io Ming Lin
2023-11-15 9:19 ` Ming Lei
2023-11-15 14:20 ` Niklas Cassel
2023-11-15 14:59 ` Ming Lei