linux-block.vger.kernel.org archive mirror
* Re: Sequential read from NVMe/XFS twice slower on Fedora 42 than on Rocky 9.5
       [not found]   ` <CAAiJnjor+=Zn62n09f-aJw2amX2wxQOb-2TB3rea9wDCU7ONoA@mail.gmail.com>
@ 2025-05-04 21:50     ` Dave Chinner
  2025-05-05 12:29       ` Laurence Oberman
  0 siblings, 1 reply; 11+ messages in thread
From: Dave Chinner @ 2025-05-04 21:50 UTC (permalink / raw)
  To: Anton Gavriliuk; +Cc: linux-nvme, linux-xfs, linux-block

[cc linux-block]

[original bug report: https://lore.kernel.org/linux-xfs/CAAiJnjoo0--yp47UKZhbu8sNSZN6DZ-QzmZBMmtr1oC=fOOgAQ@mail.gmail.com/ ]

On Sun, May 04, 2025 at 10:22:58AM +0300, Anton Gavriliuk wrote:
> > What's the comparative performance of an identical read profile
> > directly on the raw MD raid0 device?
> 
> Rocky 9.5 (5.14.0-503.40.1.el9_5.x86_64)
> 
> [root@localhost ~]# df -mh /mnt
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/md127       35T  1.3T   34T   4% /mnt
> 
> [root@localhost ~]# fio --name=test --rw=read --bs=256k
> --filename=/dev/md127 --direct=1 --numjobs=1 --iodepth=64 --exitall
> --group_reporting --ioengine=libaio --runtime=30 --time_based
> test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T)
> 256KiB-256KiB, ioengine=libaio, iodepth=64
> fio-3.39-44-g19d9
> Starting 1 process
> Jobs: 1 (f=1): [R(1)][100.0%][r=81.4GiB/s][r=334k IOPS][eta 00m:00s]
> test: (groupid=0, jobs=1): err= 0: pid=43189: Sun May  4 08:22:12 2025
>   read: IOPS=363k, BW=88.5GiB/s (95.1GB/s)(2656GiB/30001msec)
>     slat (nsec): min=971, max=312380, avg=1817.92, stdev=1367.75
>     clat (usec): min=78, max=1351, avg=174.46, stdev=28.86
>      lat (usec): min=80, max=1352, avg=176.27, stdev=28.81
> 
> Fedora 42 (6.14.5-300.fc42.x86_64)
> 
> [root@localhost anton]# df -mh /mnt
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/md127       35T  1.3T   34T   4% /mnt
> 
> [root@localhost ~]# fio --name=test --rw=read --bs=256k
> --filename=/dev/md127 --direct=1 --numjobs=1 --iodepth=64 --exitall
> --group_reporting --ioengine=libaio --runtime=30 --time_based
> test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T)
> 256KiB-256KiB, ioengine=libaio, iodepth=64
> fio-3.39-44-g19d9
> Starting 1 process
> Jobs: 1 (f=1): [R(1)][100.0%][r=41.0GiB/s][r=168k IOPS][eta 00m:00s]
> test: (groupid=0, jobs=1): err= 0: pid=5685: Sun May  4 10:14:00 2025
>   read: IOPS=168k, BW=41.0GiB/s (44.1GB/s)(1231GiB/30001msec)
>     slat (usec): min=3, max=273, avg= 5.63, stdev= 1.48
>     clat (usec): min=67, max=2800, avg=374.99, stdev=29.90
>      lat (usec): min=72, max=2914, avg=380.62, stdev=30.22

So the MD block device shows the same read performance as the
filesystem on top of it. That means this is a regression at the MD
device layer or in the block/driver layers below it. i.e. it is not
an XFS or filesystem issue at all.

-Dave.
-- 
Dave Chinner
david@fromorbit.com
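[Editorial aside, not part of the original mail: the bandwidths fio reports above follow directly from IOPS times the 256 KiB block size, which is a quick way to sanity-check the roughly 2x gap between the two kernels.]

```python
# Sanity check: fio bandwidth = IOPS x block size.
# IOPS figures copied from the 256 KiB direct-read runs quoted above.
BS = 256 * 1024  # bytes per request

rocky_bw = 363_000 * BS / 2**30   # Rocky 9.5, GiB/s
fedora_bw = 168_000 * BS / 2**30  # Fedora 42, GiB/s

print(f"Rocky 9.5 : {rocky_bw:.1f} GiB/s")         # ~88.6, consistent with BW=88.5GiB/s
print(f"Fedora 42 : {fedora_bw:.1f} GiB/s")        # ~41.0, consistent with BW=41.0GiB/s
print(f"slowdown  : {rocky_bw / fedora_bw:.2f}x")  # ~2.16x
```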

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Sequential read from NVMe/XFS twice slower on Fedora 42 than on Rocky 9.5
  2025-05-04 21:50     ` Sequential read from NVMe/XFS twice slower on Fedora 42 than on Rocky 9.5 Dave Chinner
@ 2025-05-05 12:29       ` Laurence Oberman
  2025-05-05 13:21         ` Laurence Oberman
  0 siblings, 1 reply; 11+ messages in thread
From: Laurence Oberman @ 2025-05-05 12:29 UTC (permalink / raw)
  To: Dave Chinner, Anton Gavriliuk; +Cc: linux-nvme, linux-xfs, linux-block

On Mon, 2025-05-05 at 07:50 +1000, Dave Chinner wrote:
> [cc linux-block]
> 
> [original bug report:
> https://lore.kernel.org/linux-xfs/CAAiJnjoo0--yp47UKZhbu8sNSZN6DZ-QzmZBMmtr1oC=fOOgAQ@mail.gmail.com/
>  ]
> 
> On Sun, May 04, 2025 at 10:22:58AM +0300, Anton Gavriliuk wrote:
> > > What's the comparative performance of an identical read profile
> > > directly on the raw MD raid0 device?
> > 
> > Rocky 9.5 (5.14.0-503.40.1.el9_5.x86_64)
> > 
> > [root@localhost ~]# df -mh /mnt
> > Filesystem      Size  Used Avail Use% Mounted on
> > /dev/md127       35T  1.3T   34T   4% /mnt
> > 
> > [root@localhost ~]# fio --name=test --rw=read --bs=256k
> > --filename=/dev/md127 --direct=1 --numjobs=1 --iodepth=64 --exitall
> > --group_reporting --ioengine=libaio --runtime=30 --time_based
> > test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T)
> > 256KiB-256KiB, ioengine=libaio, iodepth=64
> > fio-3.39-44-g19d9
> > Starting 1 process
> > Jobs: 1 (f=1): [R(1)][100.0%][r=81.4GiB/s][r=334k IOPS][eta
> > 00m:00s]
> > test: (groupid=0, jobs=1): err= 0: pid=43189: Sun May  4 08:22:12
> > 2025
> >   read: IOPS=363k, BW=88.5GiB/s (95.1GB/s)(2656GiB/30001msec)
> >     slat (nsec): min=971, max=312380, avg=1817.92, stdev=1367.75
> >     clat (usec): min=78, max=1351, avg=174.46, stdev=28.86
> >      lat (usec): min=80, max=1352, avg=176.27, stdev=28.81
> > 
> > Fedora 42 (6.14.5-300.fc42.x86_64)
> > 
> > [root@localhost anton]# df -mh /mnt
> > Filesystem      Size  Used Avail Use% Mounted on
> > /dev/md127       35T  1.3T   34T   4% /mnt
> > 
> > [root@localhost ~]# fio --name=test --rw=read --bs=256k
> > --filename=/dev/md127 --direct=1 --numjobs=1 --iodepth=64 --exitall
> > --group_reporting --ioengine=libaio --runtime=30 --time_based
> > test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T)
> > 256KiB-256KiB, ioengine=libaio, iodepth=64
> > fio-3.39-44-g19d9
> > Starting 1 process
> > Jobs: 1 (f=1): [R(1)][100.0%][r=41.0GiB/s][r=168k IOPS][eta
> > 00m:00s]
> > test: (groupid=0, jobs=1): err= 0: pid=5685: Sun May  4 10:14:00
> > 2025
> >   read: IOPS=168k, BW=41.0GiB/s (44.1GB/s)(1231GiB/30001msec)
> >     slat (usec): min=3, max=273, avg= 5.63, stdev= 1.48
> >     clat (usec): min=67, max=2800, avg=374.99, stdev=29.90
> >      lat (usec): min=72, max=2914, avg=380.62, stdev=30.22
> 
> So the MD block device shows the same read performance as the
> filesystem on top of it. That means this is a regression at the MD
> device layer or in the block/driver layers below it. i.e. it is not
> an XFS or filesystem issue at all.
> 
> -Dave.

I have a lab setup; let me see if I can also reproduce this and then
trace it to see where the time is being spent.



* Re: Sequential read from NVMe/XFS twice slower on Fedora 42 than on Rocky 9.5
  2025-05-05 12:29       ` Laurence Oberman
@ 2025-05-05 13:21         ` Laurence Oberman
  2025-05-05 17:39           ` Laurence Oberman
  2025-05-05 22:56           ` Dave Chinner
  0 siblings, 2 replies; 11+ messages in thread
From: Laurence Oberman @ 2025-05-05 13:21 UTC (permalink / raw)
  To: Dave Chinner, Anton Gavriliuk; +Cc: linux-nvme, linux-xfs, linux-block

On Mon, 2025-05-05 at 08:29 -0400, Laurence Oberman wrote:
> On Mon, 2025-05-05 at 07:50 +1000, Dave Chinner wrote:
> > [cc linux-block]
> > 
> > [original bug report:
> > https://lore.kernel.org/linux-xfs/CAAiJnjoo0--yp47UKZhbu8sNSZN6DZ-QzmZBMmtr1oC=fOOgAQ@mail.gmail.com/
> >  ]
> > 
> > On Sun, May 04, 2025 at 10:22:58AM +0300, Anton Gavriliuk wrote:
> > > > What's the comparative performance of an identical read profile
> > > > directly on the raw MD raid0 device?
> > > 
> > > Rocky 9.5 (5.14.0-503.40.1.el9_5.x86_64)
> > > 
> > > [root@localhost ~]# df -mh /mnt
> > > Filesystem      Size  Used Avail Use% Mounted on
> > > /dev/md127       35T  1.3T   34T   4% /mnt
> > > 
> > > [root@localhost ~]# fio --name=test --rw=read --bs=256k
> > > --filename=/dev/md127 --direct=1 --numjobs=1 --iodepth=64 --
> > > exitall
> > > --group_reporting --ioengine=libaio --runtime=30 --time_based
> > > test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB,
> > > (T)
> > > 256KiB-256KiB, ioengine=libaio, iodepth=64
> > > fio-3.39-44-g19d9
> > > Starting 1 process
> > > Jobs: 1 (f=1): [R(1)][100.0%][r=81.4GiB/s][r=334k IOPS][eta
> > > 00m:00s]
> > > test: (groupid=0, jobs=1): err= 0: pid=43189: Sun May  4 08:22:12
> > > 2025
> > >   read: IOPS=363k, BW=88.5GiB/s (95.1GB/s)(2656GiB/30001msec)
> > >     slat (nsec): min=971, max=312380, avg=1817.92, stdev=1367.75
> > >     clat (usec): min=78, max=1351, avg=174.46, stdev=28.86
> > >      lat (usec): min=80, max=1352, avg=176.27, stdev=28.81
> > > 
> > > Fedora 42 (6.14.5-300.fc42.x86_64)
> > > 
> > > [root@localhost anton]# df -mh /mnt
> > > Filesystem      Size  Used Avail Use% Mounted on
> > > /dev/md127       35T  1.3T   34T   4% /mnt
> > > 
> > > [root@localhost ~]# fio --name=test --rw=read --bs=256k
> > > --filename=/dev/md127 --direct=1 --numjobs=1 --iodepth=64 --
> > > exitall
> > > --group_reporting --ioengine=libaio --runtime=30 --time_based
> > > test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB,
> > > (T)
> > > 256KiB-256KiB, ioengine=libaio, iodepth=64
> > > fio-3.39-44-g19d9
> > > Starting 1 process
> > > Jobs: 1 (f=1): [R(1)][100.0%][r=41.0GiB/s][r=168k IOPS][eta
> > > 00m:00s]
> > > test: (groupid=0, jobs=1): err= 0: pid=5685: Sun May  4 10:14:00
> > > 2025
> > >   read: IOPS=168k, BW=41.0GiB/s (44.1GB/s)(1231GiB/30001msec)
> > >     slat (usec): min=3, max=273, avg= 5.63, stdev= 1.48
> > >     clat (usec): min=67, max=2800, avg=374.99, stdev=29.90
> > >      lat (usec): min=72, max=2914, avg=380.62, stdev=30.22
> > 
> > So the MD block device shows the same read performance as the
> > filesystem on top of it. That means this is a regression at the MD
> > device layer or in the block/driver layers below it. i.e. it is not
> > an XFS or filesystem issue at all.
> > 
> > -Dave.
> 
> I have a lab setup, let me see if I can also reproduce and then trace
> this to see where it is spending the time
> 


Not seeing half the bandwidth here, but it is still significantly
slower on the Fedora 42 kernel.
I will trace it.

9.5 kernel - 5.14.0-503.40.1.el9_5.x86_64

Run status group 0 (all jobs):
   READ: bw=14.7GiB/s (15.8GB/s), 14.7GiB/s-14.7GiB/s (15.8GB/s-
15.8GB/s), io=441GiB (473GB), run=30003-30003msec

Fedora42 kernel - 6.14.5-300.fc42.x86_64

Run status group 0 (all jobs):
   READ: bw=10.4GiB/s (11.2GB/s), 10.4GiB/s-10.4GiB/s (11.2GB/s-
11.2GB/s), io=313GiB (336GB), run=30001-30001msec






* Re: Sequential read from NVMe/XFS twice slower on Fedora 42 than on Rocky 9.5
  2025-05-05 13:21         ` Laurence Oberman
@ 2025-05-05 17:39           ` Laurence Oberman
  2025-05-22 15:07             ` Laurence Oberman
  2025-05-05 22:56           ` Dave Chinner
  1 sibling, 1 reply; 11+ messages in thread
From: Laurence Oberman @ 2025-05-05 17:39 UTC (permalink / raw)
  To: Dave Chinner, Anton Gavriliuk; +Cc: linux-nvme, linux-xfs, linux-block

On Mon, 2025-05-05 at 09:21 -0400, Laurence Oberman wrote:
> On Mon, 2025-05-05 at 08:29 -0400, Laurence Oberman wrote:
> > On Mon, 2025-05-05 at 07:50 +1000, Dave Chinner wrote:
> > > [cc linux-block]
> > > 
> > > [original bug report:
> > > https://lore.kernel.org/linux-xfs/CAAiJnjoo0--yp47UKZhbu8sNSZN6DZ-QzmZBMmtr1oC=fOOgAQ@mail.gmail.com/
> > >  ]
> > > 
> > > On Sun, May 04, 2025 at 10:22:58AM +0300, Anton Gavriliuk wrote:
> > > > > What's the comparative performance of an identical read
> > > > > profile
> > > > > directly on the raw MD raid0 device?
> > > > 
> > > > Rocky 9.5 (5.14.0-503.40.1.el9_5.x86_64)
> > > > 
> > > > [root@localhost ~]# df -mh /mnt
> > > > Filesystem      Size  Used Avail Use% Mounted on
> > > > /dev/md127       35T  1.3T   34T   4% /mnt
> > > > 
> > > > [root@localhost ~]# fio --name=test --rw=read --bs=256k
> > > > --filename=/dev/md127 --direct=1 --numjobs=1 --iodepth=64 --
> > > > exitall
> > > > --group_reporting --ioengine=libaio --runtime=30 --time_based
> > > > test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB,
> > > > (T)
> > > > 256KiB-256KiB, ioengine=libaio, iodepth=64
> > > > fio-3.39-44-g19d9
> > > > Starting 1 process
> > > > Jobs: 1 (f=1): [R(1)][100.0%][r=81.4GiB/s][r=334k IOPS][eta
> > > > 00m:00s]
> > > > test: (groupid=0, jobs=1): err= 0: pid=43189: Sun May  4
> > > > 08:22:12
> > > > 2025
> > > >   read: IOPS=363k, BW=88.5GiB/s (95.1GB/s)(2656GiB/30001msec)
> > > >     slat (nsec): min=971, max=312380, avg=1817.92,
> > > > stdev=1367.75
> > > >     clat (usec): min=78, max=1351, avg=174.46, stdev=28.86
> > > >      lat (usec): min=80, max=1352, avg=176.27, stdev=28.81
> > > > 
> > > > Fedora 42 (6.14.5-300.fc42.x86_64)
> > > > 
> > > > [root@localhost anton]# df -mh /mnt
> > > > Filesystem      Size  Used Avail Use% Mounted on
> > > > /dev/md127       35T  1.3T   34T   4% /mnt
> > > > 
> > > > [root@localhost ~]# fio --name=test --rw=read --bs=256k
> > > > --filename=/dev/md127 --direct=1 --numjobs=1 --iodepth=64 --
> > > > exitall
> > > > --group_reporting --ioengine=libaio --runtime=30 --time_based
> > > > test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB,
> > > > (T)
> > > > 256KiB-256KiB, ioengine=libaio, iodepth=64
> > > > fio-3.39-44-g19d9
> > > > Starting 1 process
> > > > Jobs: 1 (f=1): [R(1)][100.0%][r=41.0GiB/s][r=168k IOPS][eta
> > > > 00m:00s]
> > > > test: (groupid=0, jobs=1): err= 0: pid=5685: Sun May  4
> > > > 10:14:00
> > > > 2025
> > > >   read: IOPS=168k, BW=41.0GiB/s (44.1GB/s)(1231GiB/30001msec)
> > > >     slat (usec): min=3, max=273, avg= 5.63, stdev= 1.48
> > > >     clat (usec): min=67, max=2800, avg=374.99, stdev=29.90
> > > >      lat (usec): min=72, max=2914, avg=380.62, stdev=30.22
> > > 
> > > So the MD block device shows the same read performance as the
> > > filesystem on top of it. That means this is a regression at the
> > > MD
> > > device layer or in the block/driver layers below it. i.e. it is
> > > not
> > > an XFS or filesystem issue at all.
> > > 
> > > -Dave.
> > 
> > I have a lab setup, let me see if I can also reproduce and then
> > trace
> > this to see where it is spending the time
> > 
> 
> 
> Not seeing 1/2 the bandwidth but also significantly slower on
> Fedora42
> kernel.
> I will trace it
> 
> 9.5 kernel - 5.14.0-503.40.1.el9_5.x86_64
> 
> Run status group 0 (all jobs):
>    READ: bw=14.7GiB/s (15.8GB/s), 14.7GiB/s-14.7GiB/s (15.8GB/s-
> 15.8GB/s), io=441GiB (473GB), run=30003-30003msec
> 
> Fedora42 kernel - 6.14.5-300.fc42.x86_64
> 
> Run status group 0 (all jobs):
>    READ: bw=10.4GiB/s (11.2GB/s), 10.4GiB/s-10.4GiB/s (11.2GB/s-
> 11.2GB/s), io=313GiB (336GB), run=30001-30001msec
> 
> 
> 
> 

Fedora 42 kernel issue

While my difference is not as severe, we do see consistently lower
performance on the Fedora kernel (6.14.5-300.fc42.x86_64).

When I remove the software raid and run against a single NVMe, the two
kernels converge much closer. The latest upstream kernel does not show
this regression either.

Not sure yet what in our Fedora kernel is causing this.
We will work it via Bugzilla.

Regards
Laurence

TL;DR


Fedora Kernel
-------------
[root@penguin9 blktracefedora]# uname -a
Linux penguin9.2 6.14.5-300.fc42.x86_64 #1 SMP PREEMPT_DYNAMIC Fri May
2 14:16:46 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux

5 runs of the fio against /dev/md1

[root@penguin9 ~]# for i in 1 2 3 4 5
> do
> ./run_fio.sh | grep -A1 "Run status group"
> done
Run status group 0 (all jobs):
   READ: bw=11.3GiB/s (12.2GB/s), 11.3GiB/s-11.3GiB/s (12.2GB/s-
12.2GB/s), io=679GiB (729GB), run=60001-60001msec
Run status group 0 (all jobs):
   READ: bw=11.2GiB/s (12.0GB/s), 11.2GiB/s-11.2GiB/s (12.0GB/s-
12.0GB/s), io=669GiB (718GB), run=60001-60001msec
Run status group 0 (all jobs):
   READ: bw=11.4GiB/s (12.2GB/s), 11.4GiB/s-11.4GiB/s (12.2GB/s-
12.2GB/s), io=682GiB (733GB), run=60001-60001msec
Run status group 0 (all jobs):
   READ: bw=11.1GiB/s (11.9GB/s), 11.1GiB/s-11.1GiB/s (11.9GB/s-
11.9GB/s), io=664GiB (713GB), run=60001-60001msec
Run status group 0 (all jobs):
   READ: bw=11.3GiB/s (12.1GB/s), 11.3GiB/s-11.3GiB/s (12.1GB/s-
12.1GB/s), io=678GiB (728GB), run=60001-60001msec

RHEL9.5
------------
Linux penguin9.2 5.14.0-503.40.1.el9_5.x86_64 #1 SMP PREEMPT_DYNAMIC
Thu Apr 24 08:27:29 EDT 2025 x86_64 x86_64 x86_64 GNU/Linux

[root@penguin9 ~]# for i in 1 2 3 4 5; do ./run_fio.sh | grep -A1 "Run
status group"; done
Run status group 0 (all jobs):
   READ: bw=14.9GiB/s (16.0GB/s), 14.9GiB/s-14.9GiB/s (16.0GB/s-
16.0GB/s), io=894GiB (960GB), run=60003-60003msec
Run status group 0 (all jobs):
   READ: bw=14.6GiB/s (15.6GB/s), 14.6GiB/s-14.6GiB/s (15.6GB/s-
15.6GB/s), io=873GiB (938GB), run=60003-60003msec
Run status group 0 (all jobs):
   READ: bw=14.9GiB/s (16.0GB/s), 14.9GiB/s-14.9GiB/s (16.0GB/s-
16.0GB/s), io=892GiB (958GB), run=60003-60003msec
Run status group 0 (all jobs):
   READ: bw=14.5GiB/s (15.6GB/s), 14.5GiB/s-14.5GiB/s (15.6GB/s-
15.6GB/s), io=872GiB (936GB), run=60003-60003msec
Run status group 0 (all jobs):
   READ: bw=14.7GiB/s (15.8GB/s), 14.7GiB/s-14.7GiB/s (15.8GB/s-
15.8GB/s), io=884GiB (950GB), run=60003-60003msec


Remove the software raid from the stack and test just on a single NVMe
----------------------------------------------------------------------

fio --name=test --rw=read --bs=256k --filename=/dev/nvme23n1 --direct=1
--numjobs=1 --iodepth=64 --exitall --group_reporting --ioengine=libaio
--runtime=60 --time_based

Linux penguin9.2 5.14.0-503.40.1.el9_5.x86_64 #1 SMP PREEMPT_DYNAMIC
Thu Apr 24 08:27:29 EDT 2025 x86_64 x86_64 x86_64 GNU/Linux

[root@penguin9 ~]# ./run_nvme_fio.sh

Run status group 0 (all jobs):
   READ: bw=3207MiB/s (3363MB/s), 3207MiB/s-3207MiB/s (3363MB/s-
3363MB/s), io=188GiB (202GB), run=60005-60005msec


Back to fedora kernel

[root@penguin9 ~]# uname -a
Linux penguin9.2 6.14.5-300.fc42.x86_64 #1 SMP PREEMPT_DYNAMIC Fri May
2 14:16:46 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux

Within the margin of error

Run status group 0 (all jobs):
   READ: bw=3061MiB/s (3210MB/s), 3061MiB/s-3061MiB/s (3210MB/s-
3210MB/s), io=179GiB (193GB), run=60006-60006msec


Try recent upstream kernel
---------------------------
[root@penguin9 ~]# uname -a
Linux penguin9.2 6.13.0-rc7+ #2 SMP PREEMPT_DYNAMIC Mon May  5 10:59:12
EDT 2025 x86_64 x86_64 x86_64 GNU/Linux

[root@penguin9 ~]# for i in 1 2 3 4 5; do ./run_fio.sh | grep -A1 "Run
status group"; done
Run status group 0 (all jobs):
   READ: bw=14.6GiB/s (15.7GB/s), 14.6GiB/s-14.6GiB/s (15.7GB/s-
15.7GB/s), io=876GiB (941GB), run=60003-60003msec
Run status group 0 (all jobs):
   READ: bw=14.8GiB/s (15.9GB/s), 14.8GiB/s-14.8GiB/s (15.9GB/s-
15.9GB/s), io=891GiB (957GB), run=60003-60003msec
Run status group 0 (all jobs):
   READ: bw=14.8GiB/s (15.9GB/s), 14.8GiB/s-14.8GiB/s (15.9GB/s-
15.9GB/s), io=890GiB (956GB), run=60003-60003msec
Run status group 0 (all jobs):
   READ: bw=14.5GiB/s (15.6GB/s), 14.5GiB/s-14.5GiB/s (15.6GB/s-
15.6GB/s), io=871GiB (935GB), run=60003-60003msec


Update to latest upstream
-------------------------

[root@penguin9 ~]# uname -a
Linux penguin9.2 6.15.0-rc5 #1 SMP PREEMPT_DYNAMIC Mon May  5 12:18:22
EDT 2025 x86_64 x86_64 x86_64 GNU/Linux

Single nvme device is once again fine

Run status group 0 (all jobs):
   READ: bw=3061MiB/s (3210MB/s), 3061MiB/s-3061MiB/s (3210MB/s-
3210MB/s), io=179GiB (193GB), run=60006-60006msec


[root@penguin9 ~]# for i in 1 2 3 4 5; do ./run_fio.sh | grep -A1 "Run
status group"; done
Run status group 0 (all jobs):
   READ: bw=14.7GiB/s (15.7GB/s), 14.7GiB/s-14.7GiB/s (15.7GB/s-
15.7GB/s), io=880GiB (945GB), run=60003-60003msec
Run status group 0 (all jobs):
   READ: bw=18.1GiB/s (19.4GB/s), 18.1GiB/s-18.1GiB/s (19.4GB/s-
19.4GB/s), io=1087GiB (1167GB), run=60003-60003msec
Run status group 0 (all jobs):
   READ: bw=18.0GiB/s (19.4GB/s), 18.0GiB/s-18.0GiB/s (19.4GB/s-
19.4GB/s), io=1082GiB (1162GB), run=60003-60003msec
Run status group 0 (all jobs):
   READ: bw=18.2GiB/s (19.5GB/s), 18.2GiB/s-18.2GiB/s (19.5GB/s-
19.5GB/s), io=1090GiB (1170GB), run=60005-60005msec




* Re: Sequential read from NVMe/XFS twice slower on Fedora 42 than on Rocky 9.5
  2025-05-05 13:21         ` Laurence Oberman
  2025-05-05 17:39           ` Laurence Oberman
@ 2025-05-05 22:56           ` Dave Chinner
  2025-05-06 11:03             ` Anton Gavriliuk
  1 sibling, 1 reply; 11+ messages in thread
From: Dave Chinner @ 2025-05-05 22:56 UTC (permalink / raw)
  To: Laurence Oberman; +Cc: Anton Gavriliuk, linux-nvme, linux-xfs, linux-block

On Mon, May 05, 2025 at 09:21:19AM -0400, Laurence Oberman wrote:
> On Mon, 2025-05-05 at 08:29 -0400, Laurence Oberman wrote:
> > On Mon, 2025-05-05 at 07:50 +1000, Dave Chinner wrote:
> > > So the MD block device shows the same read performance as the
> > > filesystem on top of it. That means this is a regression at the MD
> > > device layer or in the block/driver layers below it. i.e. it is not
> > > an XFS or filesystem issue at all.
> > > 
> > > -Dave.
> > 
> > I have a lab setup, let me see if I can also reproduce and then trace
> > this to see where it is spending the time
> > 
> 
> 
> Not seeing 1/2 the bandwidth but also significantly slower on Fedora42
> kernel.
> I will trace it
> 
> 9.5 kernel - 5.14.0-503.40.1.el9_5.x86_64
> 
> Run status group 0 (all jobs):
>    READ: bw=14.7GiB/s (15.8GB/s), 14.7GiB/s-14.7GiB/s (15.8GB/s-
> 15.8GB/s), io=441GiB (473GB), run=30003-30003msec
> 
> Fedora42 kernel - 6.14.5-300.fc42.x86_64
> 
> Run status group 0 (all jobs):
>    READ: bw=10.4GiB/s (11.2GB/s), 10.4GiB/s-10.4GiB/s (11.2GB/s-
> 11.2GB/s), io=313GiB (336GB), run=30001-30001msec

So is this MD chunk size related? i.e. what is the chunk size
of the MD device? Is it smaller than the IO size (256kB) or larger?
Does the regression go away if the chunk size matches the IO size,
or if the IO size vs chunk size relationship is reversed?

-Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Sequential read from NVMe/XFS twice slower on Fedora 42 than on Rocky 9.5
  2025-05-05 22:56           ` Dave Chinner
@ 2025-05-06 11:03             ` Anton Gavriliuk
  2025-05-06 21:46               ` Dave Chinner
  0 siblings, 1 reply; 11+ messages in thread
From: Anton Gavriliuk @ 2025-05-06 11:03 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Laurence Oberman, linux-nvme, linux-xfs, linux-block

> So is this MD chunk size related? i.e. what is the chunk size
> of the MD device? Is it smaller than the IO size (256kB) or larger?
> Does the regression go away if the chunk size matches the IO size,
> or if the IO size vs chunk size relationship is reversed?

According to the output below, the chunk size is 512K:

[root@localhost anton]# mdadm -D /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Thu Apr 17 14:58:23 2025
        Raid Level : raid0
        Array Size : 37505814528 (34.93 TiB 38.41 TB)
      Raid Devices : 12
     Total Devices : 12
       Persistence : Superblock is persistent

       Update Time : Thu Apr 17 14:58:23 2025
             State : clean
    Active Devices : 12
   Working Devices : 12
    Failed Devices : 0
     Spare Devices : 0

            Layout : original
        Chunk Size : 512K

Consistency Policy : none

              Name : localhost.localdomain:127  (local to host
localhost.localdomain)
              UUID : 2fadc96b:f37753af:f3b528a0:067c320d
            Events : 0

    Number   Major   Minor   RaidDevice State
       0     259       15        0      active sync   /dev/nvme7n1
       1     259       27        1      active sync   /dev/nvme0n1
       2     259       10        2      active sync   /dev/nvme1n1
       3     259       28        3      active sync   /dev/nvme2n1
       4     259       13        4      active sync   /dev/nvme8n1
       5     259       22        5      active sync   /dev/nvme5n1
       6     259       26        6      active sync   /dev/nvme3n1
       7     259       16        7      active sync   /dev/nvme4n1
       8     259       24        8      active sync   /dev/nvme9n1
       9     259       14        9      active sync   /dev/nvme10n1
      10     259       25       10      active sync   /dev/nvme11n1
      11     259       12       11      active sync   /dev/nvme12n1
[root@localhost anton]# uname -r
6.14.5-300.fc42.x86_64
[root@localhost anton]# cat /proc/mdstat
Personalities : [raid0]
md127 : active raid0 nvme4n1[7] nvme1n1[2] nvme12n1[11] nvme7n1[0]
nvme9n1[8] nvme11n1[10] nvme2n1[3] nvme8n1[4] nvme0n1[1] nvme5n1[5]
nvme3n1[6] nvme10n1[9]
      37505814528 blocks super 1.2 512k chunks

unused devices: <none>
[root@localhost anton]#
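[Editorial aside, not part of the original mail: given the geometry reported above (12 members, 512K chunks), a chunk-aligned 256 KiB read always fits inside one chunk and so hits a single member, while a 1024 KiB read must be split across two members. A minimal sketch of the striping arithmetic, ignoring the md superblock/data offset; the helper name is made up for illustration.]

```python
# RAID0 mapping sketch for the array above: 12 members, 512 KiB chunks.
CHUNK = 512 * 1024
NDEVS = 12

def members_touched(offset, length):
    """Member indices hit by a read of [offset, offset + length)."""
    first_chunk = offset // CHUNK
    last_chunk = (offset + length - 1) // CHUNK
    return {c % NDEVS for c in range(first_chunk, last_chunk + 1)}

print(members_touched(0, 256 * 1024))           # {0}    - fits in one chunk
print(members_touched(0, 1024 * 1024))          # {0, 1} - split across two members
print(members_touched(12 * CHUNK, 256 * 1024))  # {0}    - next stripe wraps around
```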

When the I/O size is less than 512K

[root@localhost ~]# fio --name=test --rw=read --bs=256k
--filename=/dev/md127 --direct=1 --numjobs=1 --iodepth=64 --exitall
--group_reporting --ioengine=libaio --runtime=30 --time_based
test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T)
256KiB-256KiB, ioengine=libaio, iodepth=64
fio-3.39-44-g19d9
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=48.1GiB/s][r=197k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=14340: Tue May  6 13:59:23 2025
  read: IOPS=197k, BW=48.0GiB/s (51.6GB/s)(1441GiB/30001msec)
    slat (usec): min=3, max=1041, avg= 4.74, stdev= 1.48
    clat (usec): min=76, max=2042, avg=320.30, stdev=26.82
     lat (usec): min=79, max=2160, avg=325.04, stdev=27.08

When the I/O size is greater than 512K

[root@localhost ~]# fio --name=test --rw=read --bs=1024k
--filename=/dev/md127 --direct=1 --numjobs=1 --iodepth=64 --exitall
--group_reporting --ioengine=libaio --runtime=30 --time_based
test: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T)
1024KiB-1024KiB, ioengine=libaio, iodepth=64
fio-3.39-44-g19d9
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=63.7GiB/s][r=65.2k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=14395: Tue May  6 14:00:28 2025
  read: IOPS=64.6k, BW=63.0GiB/s (67.7GB/s)(1891GiB/30001msec)
    slat (usec): min=9, max=1045, avg=15.12, stdev= 3.84
    clat (usec): min=81, max=18494, avg=975.87, stdev=112.11
     lat (usec): min=96, max=18758, avg=990.99, stdev=113.49

But that is still much worse than the 256k result on Rocky 9.5.

Anton

Tue, 6 May 2025 at 01:56, Dave Chinner <david@fromorbit.com>:
>
> On Mon, May 05, 2025 at 09:21:19AM -0400, Laurence Oberman wrote:
> > On Mon, 2025-05-05 at 08:29 -0400, Laurence Oberman wrote:
> > > On Mon, 2025-05-05 at 07:50 +1000, Dave Chinner wrote:
> > > > So the MD block device shows the same read performance as the
> > > > filesystem on top of it. That means this is a regression at the MD
> > > > device layer or in the block/driver layers below it. i.e. it is not
> > > > an XFS or filesystem issue at all.
> > > >
> > > > -Dave.
> > >
> > > I have a lab setup, let me see if I can also reproduce and then trace
> > > this to see where it is spending the time
> > >
> >
> >
> > Not seeing 1/2 the bandwidth but also significantly slower on Fedora42
> > kernel.
> > I will trace it
> >
> > 9.5 kernel - 5.14.0-503.40.1.el9_5.x86_64
> >
> > Run status group 0 (all jobs):
> >    READ: bw=14.7GiB/s (15.8GB/s), 14.7GiB/s-14.7GiB/s (15.8GB/s-
> > 15.8GB/s), io=441GiB (473GB), run=30003-30003msec
> >
> > Fedora42 kernel - 6.14.5-300.fc42.x86_64
> >
> > Run status group 0 (all jobs):
> >    READ: bw=10.4GiB/s (11.2GB/s), 10.4GiB/s-10.4GiB/s (11.2GB/s-
> > 11.2GB/s), io=313GiB (336GB), run=30001-30001msec
>
> So is this MD chunk size related? i.e. what is the chunk size
> of the MD device? Is it smaller than the IO size (256kB) or larger?
> Does the regression go away if the chunk size matches the IO size,
> or if the IO size vs chunk size relationship is reversed?
>
> -Dave.
> --
> Dave Chinner
> david@fromorbit.com


* Re: Sequential read from NVMe/XFS twice slower on Fedora 42 than on Rocky 9.5
  2025-05-06 11:03             ` Anton Gavriliuk
@ 2025-05-06 21:46               ` Dave Chinner
  2025-05-07 12:26                 ` Anton Gavriliuk
  0 siblings, 1 reply; 11+ messages in thread
From: Dave Chinner @ 2025-05-06 21:46 UTC (permalink / raw)
  To: Anton Gavriliuk; +Cc: Laurence Oberman, linux-nvme, linux-xfs, linux-block

On Tue, May 06, 2025 at 02:03:37PM +0300, Anton Gavriliuk wrote:
> > So is this MD chunk size related? i.e. what is the chunk size
> > of the MD device? Is it smaller than the IO size (256kB) or larger?
> > Does the regression go away if the chunk size matches the IO size,
> > or if the IO size vs chunk size relationship is reversed?
> 
> According to the output below, the chunk size is 512K,

Ok.

`iostat -dxm 5` output during the fio run on both kernels will give
us some indication of the differences in IO patterns, queue depths,
etc.

Silly question: if you use DM to create the same RAID 0 array
with a dm table such as:

0 75011629056 striped 12 1024 /dev/nvme7n1 0 /dev/nvme0n1 0 ....  /dev/nvme12n1 0

to create a similar 38TB raid 0 array, do you see the same perf
degradation?

-Dave.
-- 
Dave Chinner
david@fromorbit.com
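[Editorial aside, not part of the original mail: the numbers in that dm table line can be derived from the mdadm output quoted earlier in the thread. dm-stripe table fields are in 512-byte sectors; this sketch assumes mdadm's "Array Size" is in KiB, as it conventionally is.]

```python
# Derive the dm-stripe table parameters from the mdadm -D output.
array_kib = 37_505_814_528  # "Array Size" from mdadm -D, in KiB
chunk_kib = 512             # "Chunk Size : 512K"
ndevs = 12

length_sectors = array_kib * 2  # KiB -> 512-byte sectors
chunk_sectors = chunk_kib * 2

print(f"0 {length_sectors} striped {ndevs} {chunk_sectors} ...")
# -> 0 75011629056 striped 12 1024 ...
```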


* Re: Sequential read from NVMe/XFS twice slower on Fedora 42 than on Rocky 9.5
  2025-05-06 21:46               ` Dave Chinner
@ 2025-05-07 12:26                 ` Anton Gavriliuk
  2025-05-07 21:59                   ` Dave Chinner
  0 siblings, 1 reply; 11+ messages in thread
From: Anton Gavriliuk @ 2025-05-07 12:26 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Laurence Oberman, linux-nvme, linux-xfs, linux-block

[-- Attachment #1: Type: text/plain, Size: 2937 bytes --]

> `iostat -dxm 5` output during the fio run on both kernels will give us some indication of the differences in IO patterns, queue depths, etc.

iostat files attached.

fedora 42

[root@localhost ~]# fio --name=test --rw=read --bs=256k
--filename=/mnt/testfile --direct=1 --numjobs=1 --iodepth=64 --exitall
--group_reporting --ioengine=libaio --runtime=30 --time_based
test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T)
256KiB-256KiB, ioengine=libaio, iodepth=64
fio-3.39-44-g19d9
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=43.6GiB/s][r=179k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=18826: Wed May  7 13:44:38 2025
  read: IOPS=178k, BW=43.4GiB/s (46.7GB/s)(1303GiB/30001msec)
    slat (usec): min=3, max=267, avg= 5.29, stdev= 1.62
    clat (usec): min=147, max=2549, avg=354.18, stdev=28.87
     lat (usec): min=150, max=2657, avg=359.47, stdev=29.15

rocky 9.5

[root@localhost ~]# fio --name=test --rw=read --bs=256k
--filename=/mnt/testfile --direct=1 --numjobs=1 --iodepth=64 --exitall
--group_reporting --ioengine=libaio --runtime=30 --time_based
test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T)
256KiB-256KiB, ioengine=libaio, iodepth=64
fio-3.39-44-g19d9
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=98.3GiB/s][r=403k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=10500: Wed May  7 15:16:39 2025
  read: IOPS=403k, BW=98.4GiB/s (106GB/s)(2951GiB/30001msec)
    slat (nsec): min=1101, max=156185, avg=2087.89, stdev=1415.57
    clat (usec): min=82, max=951, avg=156.56, stdev=20.19
     lat (usec): min=83, max=1078, avg=158.65, stdev=20.25
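[Editorial aside, not part of the original mail: by Little's law, IOPS times mean total latency recovers the average in-flight request count. Both runs above sit pinned at the configured iodepth=64, so the regression shows up entirely as roughly doubled per-I/O completion latency, not as a submission-side stall.]

```python
# Little's law (L = lambda * W) applied to the fio results above.
def inflight(iops, mean_lat_usec):
    """Average number of in-flight requests."""
    return iops * mean_lat_usec * 1e-6

print(round(inflight(403_000, 158.65)))  # Rocky 9.5: ~64
print(round(inflight(178_000, 359.47)))  # Fedora 42: ~64
```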

> Silly question: if you use DM to create the same RAID 0 array with a dm table such as:
> 0 75011629056 striped 12 1024 /dev/nvme7n1 0 /dev/nvme0n1 0 ....  /dev/nvme12n1 0
> to create a similar 38TB raid 0 array, do you see the same perf degradation?

Will check that tomorrow.

Anton

Wed, 7 May 2025 at 00:46, Dave Chinner <david@fromorbit.com>:
>
> On Tue, May 06, 2025 at 02:03:37PM +0300, Anton Gavriliuk wrote:
> > > So is this MD chunk size related? i.e. what is the chunk size
> > > of the MD device? Is it smaller than the IO size (256kB) or larger?
> > > Does the regression go away if the chunk size matches the IO size,
> > > or if the IO size vs chunk size relationship is reversed?
> >
> > According to the output below, the chunk size is 512K,
>
> Ok.
>
> `iostat -dxm 5` output during the fio run on both kernels will give
> us some indication of the differences in IO patterns, queue depths,
> etc.
>
> Silly question: if you use DM to create the same RAID 0 array
> with a dm table such as:
>
> 0 75011629056 striped 12 1024 /dev/nvme7n1 0 /dev/nvme0n1 0 ....  /dev/nvme12n1 0
>
> to create a similar 38TB raid 0 array, do you see the same perf
> degradation?
>
> -Dave.
> --
> Dave Chinner
> david@fromorbit.com
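
[Editor's note: for anyone wanting to try Dave's suggestion, the dm table line can be assembled with a small script before feeding it to `dmsetup create`. This is a sketch, not tested on this hardware: the 512KiB chunk (1024 sectors) and the per-device sector count (75011629056 / 12 from the table quoted above) are derived from the thread, while the two-entry DEVS list and the `raid0test` name are placeholders to be replaced with the real twelve member devices.]

```shell
#!/bin/sh
# Sketch: assemble a dm-striped table equivalent to the MD RAID 0 array.
# Assumptions: 512KiB chunk (1024 sectors), 6250969088 usable sectors per
# member (75011629056 / 12 from the table above); the two-entry DEVS list
# is a placeholder -- extend it to all twelve member devices.
CHUNK=1024                  # stripe chunk size in 512-byte sectors
PER_DEV=6250969088          # usable sectors per member device
DEVS="/dev/nvme0n1 /dev/nvme1n1"

set -- $DEVS
NDEVS=$#
# Total length must be a whole number of full stripes.
LEN=$((PER_DEV / CHUNK * CHUNK * NDEVS))

TABLE="0 $LEN striped $NDEVS $CHUNK"
for d in "$@"; do
    TABLE="$TABLE $d 0"
done
echo "$TABLE"
# To actually create the device (needs root):
#   echo "$TABLE" | dmsetup create raid0test
```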

[-- Attachment #2: rocky_95_iostat_dxm_5 --]
[-- Type: application/octet-stream, Size: 30896 bytes --]

Linux 5.14.0-503.40.1.el9_5.x86_64 (localhost.localdomain) 	05/07/2025 	_x86_64_	(48 CPU)

Device            r/s     rMB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wMB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dMB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
dm-0             9.50      0.54     0.00   0.00    0.12    58.52    2.46      0.13     0.00   0.00    0.62    53.28    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.88
dm-1             0.16      0.00     0.00   0.00    0.12    21.35    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
dm-2             0.22      0.00     0.00   0.00    0.08     8.66    1.87      0.15     0.00   0.00    0.49    82.49    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.03
md127         56228.39  14018.86     0.00   0.00    0.12   255.30    0.01      0.00     0.00   0.00    0.00   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    6.56  14.54
nvme0c0n1     4685.58   1168.23     0.56   0.01    0.12   255.31    0.00      0.00     0.00   0.00    0.00   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme0n1       4685.58   1168.23     0.00   0.00    0.12   255.31    0.00      0.00     0.00   0.00    0.00   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme10c10n1   4685.79   1168.24     0.00   0.00    0.12   255.30    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme10n1      4685.80   1168.24     0.00   0.00    0.12   255.30    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme11c11n1   4685.59   1168.24     0.00   0.00    0.12   255.31    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme11n1      4685.59   1168.24     0.00   0.00    0.12   255.31    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme12c12n1   4685.61   1168.24     0.56   0.01    0.12   255.31    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme12n1      4685.63   1168.24     0.00   0.00    0.12   255.31    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme1c1n1     4685.61   1168.24     0.00   0.00    0.12   255.31    0.00      0.00     0.00   0.00    0.00   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme1n1       4685.63   1168.24     0.00   0.00    0.12   255.31    0.00      0.00     0.00   0.00    0.00   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme2c2n1     4685.73   1168.24     0.00   0.00    0.12   255.30    0.00      0.00     0.00   0.00    0.00   288.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme2n1       4685.74   1168.24     0.00   0.00    0.12   255.30    0.00      0.00     0.00   0.00    0.00   288.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.55  14.53
nvme3c3n1     4685.63   1168.24     0.00   0.00    0.12   255.31    0.00      0.00     0.00   0.00    0.00   224.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.55  14.53
nvme3n1       4685.61   1168.24     0.00   0.00    0.12   255.31    0.00      0.00     0.00   0.00    0.00   224.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.55  14.53
nvme4c4n1     4685.63   1168.25     0.00   0.00    0.12   255.31    0.00      0.00     0.00   0.00    0.00   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme4n1       4685.63   1168.24     0.00   0.00    0.12   255.31    0.00      0.00     0.00   0.00    0.00   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme5c5n1       13.57      0.60     1.51  10.00    0.22    44.99    4.91      0.33     1.55  23.97    4.20    69.50    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.02   1.00
nvme5n1         13.57      0.60     0.00   0.00    0.23    44.99    4.91      0.33     0.00   0.00    4.19    69.50    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.02   0.23
nvme6c6n1     4685.59   1168.24     0.00   0.00    0.12   255.31    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme6n1       4685.57   1168.24     0.00   0.00    0.12   255.31    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme7c7n1     4685.62   1168.24     0.00   0.00    0.12   255.31    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme7n1       4685.63   1168.24     0.00   0.00    0.12   255.31    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme8c8n1     4685.59   1168.24     0.55   0.01    0.12   255.31    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme8n1       4685.59   1168.24     0.00   0.00    0.12   255.31    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.55  14.53
nvme9c9n1     4685.61   1168.24     0.00   0.00    0.12   255.31    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53
nvme9n1       4685.60   1168.24     0.00   0.00    0.12   255.31    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.54  14.53


Device            r/s     rMB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wMB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dMB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
dm-0             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
dm-1             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
dm-2             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
md127         403871.40 100964.85     0.00   0.00    0.12   255.99    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00   47.35 100.00
nvme0c0n1     33654.80   8413.70     4.20   0.01    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.96
nvme0n1       33655.20   8413.80     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.96
nvme10c10n1   33654.80   8413.70     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.90  99.92
nvme10n1      33654.40   8413.60     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.91  99.92
nvme11c11n1   33654.40   8413.60     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.94 100.00
nvme11n1      33654.80   8413.70     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.94 100.00
nvme12c12n1   33654.80   8413.70     4.00   0.01    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.96
nvme12n1      33654.00   8413.50     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.96
nvme1c1n1     33655.40   8413.85     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.92
nvme1n1       33654.40   8413.60     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.92
nvme2c2n1     33655.60   8413.90     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.96
nvme2n1       33654.60   8413.65     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.94  99.94
nvme3c3n1     33654.20   8413.55     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.94  99.94
nvme3n1       33655.00   8413.75     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.94  99.94
nvme4c4n1     33654.00   8413.50     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.94
nvme4n1       33654.20   8413.55     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.96
nvme5c5n1        0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme5n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme6c6n1     33654.20   8413.55     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.91  99.94
nvme6n1       33654.80   8413.70     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.94
nvme7c7n1     33654.40   8413.60     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.98
nvme7n1       33653.80   8413.45     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.94  99.98
nvme8c8n1     33654.60   8413.65     3.80   0.01    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.95  99.98
nvme8n1       33654.60   8413.65     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.96  99.98
nvme9c9n1     33654.40   8413.65     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.90  99.98
nvme9n1       33654.80   8413.70     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.91  99.96


Device            r/s     rMB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wMB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dMB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
dm-0             0.00      0.00     0.00   0.00    0.00     0.00    0.20      0.00     0.00   0.00    0.00    16.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
dm-1             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
dm-2             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
md127         404956.20 100416.80     0.00   0.00    0.12   253.92    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00   47.36 100.00
nvme0c0n1     33745.00   8368.00     4.00   0.01    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.90
nvme0n1       33745.00   8368.00     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.94  99.90
nvme10c10n1   33745.60   8368.10     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.90  99.98
nvme10n1      33745.60   8368.10     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.91  99.98
nvme11c11n1   33745.00   8368.00     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.92
nvme11n1      33745.00   8368.00     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.92
nvme12c12n1   33745.40   8368.05     4.00   0.01    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.94
nvme12n1      33745.20   8368.00     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.94
nvme1c1n1     33745.20   8368.05     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.91  99.96
nvme1n1       33745.40   8368.10     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.94  99.96
nvme2c2n1     33745.20   8368.05     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.91  99.96
nvme2n1       33745.60   8368.15     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.96
nvme3c3n1     33745.20   8368.05     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.95  99.98
nvme3n1       33745.40   8368.10     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.97  99.96
nvme4c4n1     33745.20   8368.05     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.91  99.90
nvme4n1       33745.40   8368.10     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.90
nvme5c5n1        0.00      0.00     0.00   0.00    0.00     0.00    0.20      0.00     0.00   0.00    0.00    16.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme5n1          0.00      0.00     0.00   0.00    0.00     0.00    0.20      0.00     0.00   0.00    0.00    16.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme6c6n1     33745.00   8368.00     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.90  99.96
nvme6n1       33745.00   8368.00     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.90  99.96
nvme7c7n1     33745.40   8368.05     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.94
nvme7n1       33745.80   8368.15     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.94  99.94
nvme8c8n1     33745.00   8367.95     4.00   0.01    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.98
nvme8n1       33745.00   8367.95     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.98
nvme9c9n1     33745.40   8368.00     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.96
nvme9n1       33745.60   8368.10     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.95  99.96


Device            r/s     rMB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wMB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dMB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
dm-0             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
dm-1             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
dm-2             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
md127         403128.60 100779.15     0.00   0.00    0.12   255.99    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00   47.30 100.00
nvme0c0n1     33593.20   8398.30     4.00   0.01    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.91  99.98
nvme0n1       33593.20   8398.30     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.91  99.98
nvme10c10n1   33593.20   8398.30     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.90  99.94
nvme10n1      33593.20   8398.30     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.89  99.94
nvme11c11n1   33593.20   8398.30     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.94
nvme11n1      33593.00   8398.25     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.94  99.94
nvme12c12n1   33593.20   8398.30     4.00   0.01    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.90 100.00
nvme12n1      33593.60   8398.40     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92 100.00
nvme1c1n1     33593.20   8398.30     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.91  99.98
nvme1n1       33593.00   8398.25     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.98
nvme2c2n1     33593.40   8398.35     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.94
nvme2n1       33593.00   8398.25     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.94
nvme3c3n1     33593.20   8398.30     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93 100.00
nvme3n1       33593.00   8398.25     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.96 100.00
nvme4c4n1     33593.00   8398.25     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.92
nvme4n1       33592.80   8398.20     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.92
nvme5c5n1        0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme5n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme6c6n1     33593.20   8398.30     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.90  99.96
nvme6n1       33593.00   8398.25     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.91  99.96
nvme7c7n1     33593.40   8398.35     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.92
nvme7n1       33593.40   8398.35     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.94  99.92
nvme8c8n1     33593.40   8398.35     4.00   0.01    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.95 100.00
nvme8n1       33593.20   8398.30     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.97 100.00
nvme9c9n1     33593.40   8398.35     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.90  99.96
nvme9n1       33593.00   8398.25     0.00   0.00    0.12   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.90  99.96


Device            r/s     rMB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wMB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dMB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
dm-0             0.00      0.00     0.00   0.00    0.00     0.00    0.40      0.00     0.00   0.00    0.00     8.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
dm-1             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
dm-2             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
md127         405567.20 100569.55     0.00   0.00    0.12   253.92    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00   47.51 100.00
nvme0c0n1     33796.20   8380.80     4.00   0.01    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.90
nvme0n1       33795.80   8380.70     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.90
nvme10c10n1   33796.40   8380.80     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.90  99.94
nvme10n1      33796.60   8380.85     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.94
nvme11c11n1   33796.00   8380.75     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.96
nvme11n1      33796.00   8380.75     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.96  99.96
nvme12c12n1   33796.80   8380.90     4.00   0.01    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.92
nvme12n1      33796.20   8380.75     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.94  99.92
nvme1c1n1     33796.20   8380.80     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.96
nvme1n1       33796.40   8380.85     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.96
nvme2c2n1     33796.00   8380.75     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.94
nvme2n1       33796.20   8380.80     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.95  99.92
nvme3c3n1     33796.20   8380.80     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.96 100.00
nvme3n1       33796.60   8380.90     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.97 100.00
nvme4c4n1     33796.60   8380.90     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.92
nvme4n1       33796.80   8380.95     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.91  99.92
nvme5c5n1        0.00      0.00     0.00   0.00    0.00     0.00    0.40      0.00     0.00   0.00    0.00     8.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme5n1          0.00      0.00     0.00   0.00    0.00     0.00    0.40      0.00     0.00   0.00    0.00     8.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme6c6n1     33795.80   8380.70     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.91  99.96
nvme6n1       33796.00   8380.75     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.92  99.96
nvme7c7n1     33796.20   8380.75     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.94  99.98
nvme7n1       33796.00   8380.70     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.97  99.96
nvme8c8n1     33795.80   8380.65     4.00   0.01    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93 100.00
nvme8n1       33796.00   8380.70     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.94 100.00
nvme9c9n1     33796.20   8380.75     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.93  99.98
nvme9n1       33796.60   8380.85     0.00   0.00    0.12   253.93    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    3.97  99.98
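
[Editor's note: the aggregate md127 rows are the quickest comparison point between the two captures. A small helper like the following can pull them out; it assumes the 23-column `iostat -dxm` layout shown in the attachments, and the function name is illustrative.]

```shell
# Print the md127 aggregate read rate, throughput, queue depth, and
# utilisation from an iostat -dxm capture on stdin. Field numbers match
# the 23-column header in the captures above ($22 = aqu-sz, $23 = %util).
summarize_md127() {
    awk '$1 == "md127" {
        printf "r/s=%s rMB/s=%s aqu-sz=%s util=%s%%\n", $2, $3, $22, $23
    }'
}
# Example usage (attachment file name as saved locally):
#   summarize_md127 < rocky_95_iostat_dxm_5
```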



[-- Attachment #3: fedora_42_iostat_dxm_5 --]
[-- Type: application/octet-stream, Size: 23936 bytes --]

Linux 6.14.5-300.fc42.x86_64 (localhost.localdomain) 	05/07/2025 	_x86_64_	(48 CPU)

Device            r/s     rMB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wMB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dMB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
dm-0             0.06      0.00     0.00   0.00    0.14    25.06    0.09      0.00     0.00   0.00    0.69    30.39    0.00      0.05     0.00   0.00    7.40 283478.13    0.00    0.00    0.00   0.08
md127          236.92     62.75     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.70   206.80    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.02   0.13
nvme0c0n1       19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00   176.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme0n1         19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.33   176.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme10c10n1     19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme10n1        19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme11c11n1     19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme11n1        19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme12c12n1     19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme12n1        19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme1c1n1       19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme1n1         19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    1.00   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme2c2n1       19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme2n1         19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    1.00   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme3c3n1       19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme3n1         19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme4c4n1       19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme4n1         19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme5c5n1       19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme5n1         19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme6c6n1        0.06      0.00     0.00   2.32    0.17    23.83    0.10      0.00     0.00   3.34    3.00    27.78    0.00      0.05     0.00  82.76    3.50 1443929.20    0.00    0.00    0.00   0.18
nvme6n1          0.06      0.00     0.00   0.00    0.15    23.83    0.10      0.00     0.00   0.00    2.93    27.78    0.00      0.05     0.00   0.00    3.50 1443929.20    0.00    0.00    0.00   0.04
nvme7c7n1       19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00   162.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme7n1         19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.50   162.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme8c8n1       19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00   192.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme8n1         19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    1.00   192.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme9c9n1       19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
nvme9n1         19.74      5.23     0.00   0.00    0.10   271.22    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.11
zram0            0.00      0.00     0.00   0.00    0.00    21.18    0.00      0.00     0.00   0.00    0.00     4.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00


Device            r/s     rMB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wMB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dMB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
dm-0             0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
md127         177981.80  44494.15     0.00   0.00    0.10   255.99    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00   18.63 100.00
nvme0c0n1     14831.60   3707.90     1.80   0.01    0.11   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.56  85.16
nvme0n1       14831.60   3707.90     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.54  84.14
nvme10c10n1   14831.20   3707.80     1.80   0.01    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  84.44
nvme10n1      14831.20   3707.80     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.53  83.56
nvme11c11n1   14831.20   3707.80     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  84.28
nvme11n1      14831.20   3707.80     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.53  83.08
nvme12c12n1   14831.20   3707.80     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  84.16
nvme12n1      14831.20   3707.80     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.52  83.14
nvme1c1n1     14831.60   3707.90     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.56  84.38
nvme1n1       14831.20   3707.80     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.52  83.62
nvme2c2n1     14831.20   3707.80     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  84.44
nvme2n1       14831.20   3707.80     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.52  83.62
nvme3c3n1     14831.20   3707.80     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  84.34
nvme3n1       14831.40   3707.85     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.53  83.56
nvme4c4n1     14831.20   3707.80     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  84.06
nvme4n1       14831.20   3707.80     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.52  83.20
nvme5c5n1     14831.20   3707.80     1.60   0.01    0.11   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.57  84.68
nvme5n1       14831.20   3707.80     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.54  83.96
nvme6c6n1        0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme6n1          0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme7c7n1     14831.20   3707.80     0.00   0.00    0.11   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.56  84.16
nvme7n1       14831.00   3707.75     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.53  83.34
nvme8c8n1     14831.40   3707.85     0.00   0.00    0.11   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.56  85.84
nvme8n1       14831.20   3707.80     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  85.04
nvme9c9n1     14831.20   3707.80     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  84.30
nvme9n1       14831.60   3707.90     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.53  83.40
zram0            0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00


Device            r/s     rMB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wMB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dMB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
dm-0             0.00      0.00     0.00   0.00    0.00     0.00    8.80      0.12     0.00   0.00    0.02    13.55    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.02
md127         178035.00  44507.40     0.00   0.00    0.10   255.99    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00   18.61 100.00
nvme0c0n1     14835.60   3708.90     1.80   0.01    0.11   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.56  84.40
nvme0n1       14835.60   3708.90     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.54  83.58
nvme10c10n1   14836.00   3709.00     1.80   0.01    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.56  84.02
nvme10n1      14835.60   3708.90     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.53  83.32
nvme11c11n1   14835.80   3708.95     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  83.52
nvme11n1      14836.00   3709.00     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.52  82.90
nvme12c12n1   14836.00   3709.00     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.56  83.74
nvme12n1      14835.60   3708.90     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.52  83.08
nvme1c1n1     14835.60   3708.90     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  83.60
nvme1n1       14835.80   3708.95     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.52  82.90
nvme2c2n1     14836.00   3709.00     0.00   0.00    0.11   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.56  83.76
nvme2n1       14836.00   3709.00     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.52  83.10
nvme3c3n1     14836.00   3709.00     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  84.00
nvme3n1       14835.80   3708.95     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.53  83.44
nvme4c4n1     14835.60   3708.90     0.00   0.00    0.11   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.57  85.04
nvme4n1       14835.60   3708.90     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.54  84.20
nvme5c5n1     14835.60   3708.90     1.80   0.01    0.11   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.56  84.02
nvme5n1       14836.00   3709.00     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.53  83.38
nvme6c6n1        0.00      0.00     0.00   0.00    0.00     0.00    8.80      0.12     0.00   0.00    0.07    13.55    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.02
nvme6n1          0.00      0.00     0.00   0.00    0.00     0.00    8.80      0.12     0.00   0.00    0.02    13.55    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.02
nvme7c7n1     14835.80   3708.95     0.00   0.00    0.11   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.56  85.40
nvme7n1       14835.80   3708.95     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  84.72
nvme8c8n1     14835.60   3708.90     0.00   0.00    0.11   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.56  84.24
nvme8n1       14836.00   3709.00     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.53  83.56
nvme9c9n1     14836.00   3709.00     0.00   0.00    0.11   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.56  84.26
nvme9n1       14835.60   3708.90     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.53  83.30
zram0            0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00


Device            r/s     rMB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wMB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dMB/s   drqm/s  %drqm d_await dareq-sz     f/s f_await  aqu-sz  %util
dm-0             0.00      0.00     0.00   0.00    0.00     0.00    2.40      0.02     0.00   0.00    0.00     6.58    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
md127         177958.00  44488.15     0.00   0.00    0.10   255.99    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00   18.56 100.00
nvme0c0n1     14829.60   3707.40     1.80   0.01    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  84.04
nvme0n1       14829.60   3707.40     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.52  83.28
nvme10c10n1   14829.40   3707.35     1.80   0.01    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  84.16
nvme10n1      14829.60   3707.40     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.52  83.24
nvme11c11n1   14829.40   3707.35     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  83.76
nvme11n1      14829.40   3707.35     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.52  82.86
nvme12c12n1   14829.20   3707.30     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  84.42
nvme12n1      14829.60   3707.40     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.53  83.56
nvme1c1n1     14829.60   3707.40     0.00   0.00    0.11   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.56  84.82
nvme1n1       14829.40   3707.35     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.53  84.04
nvme2c2n1     14829.20   3707.30     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  83.26
nvme2n1       14829.20   3707.30     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.51  82.38
nvme3c3n1     14829.20   3707.30     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  84.56
nvme3n1       14829.20   3707.30     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.53  83.60
nvme4c4n1     14829.60   3707.40     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.55  84.36
nvme4n1       14829.60   3707.40     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.52  83.54
nvme5c5n1     14829.60   3707.40     1.80   0.01    0.11   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.56  84.10
nvme5n1       14829.20   3707.30     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.54  83.36
nvme6c6n1        0.00      0.00     0.00   0.00    0.00     0.00    2.40      0.02     0.00   0.00    0.08     6.58    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme6n1          0.00      0.00     0.00   0.00    0.00     0.00    2.40      0.02     0.00   0.00    0.00     6.58    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00
nvme7c7n1     14829.40   3707.35     0.00   0.00    0.11   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.56  84.10
nvme7n1       14829.60   3707.40     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.53  83.18
nvme8c8n1     14829.60   3707.40     0.00   0.00    0.11   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.56  84.24
nvme8n1       14829.20   3707.30     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.53  83.52
nvme9c9n1     14829.20   3707.30     0.00   0.00    0.11   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.56  84.50
nvme9n1       14829.60   3707.40     0.00   0.00    0.10   256.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    1.53  83.56
zram0            0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00      0.00     0.00   0.00    0.00     0.00    0.00    0.00    0.00   0.00



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Sequential read from NVMe/XFS twice slower on Fedora 42 than on Rocky 9.5
  2025-05-07 12:26                 ` Anton Gavriliuk
@ 2025-05-07 21:59                   ` Dave Chinner
  0 siblings, 0 replies; 11+ messages in thread
From: Dave Chinner @ 2025-05-07 21:59 UTC (permalink / raw)
  To: Anton Gavriliuk; +Cc: Laurence Oberman, linux-nvme, linux-xfs, linux-block

On Wed, May 07, 2025 at 03:26:08PM +0300, Anton Gavriliuk wrote:
> > `iostat -dxm 5` output during the fio run on both kernels will give us some indication of the differences in IO patterns, queue depths, etc.
> 
> iostat files attached.

Yeah, that definitely looks like MD is the bottleneck. In both
traces the NVMe drives are completing read IOs in about 110-120us.
In Fedora 42, the nvme drives are not at 100% utilisation, so the md
device is not feeding them fast enough.

That can also be seen in the queue depths: the Rocky 9.5 kernel runs
the nvme devices at a queue depth of about 4 IOs, whilst it is only
about 1.5 on the Fedora 42 kernel.
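As an aside, those per-device queue depth and utilisation figures can
be averaged straight out of the attached `iostat -dxm 5` logs; a
minimal awk sketch, assuming the 23-field layout shown in the
attachments (device name plus 22 stat columns, with aqu-sz and %util
last):

```shell
# Average aqu-sz (second-to-last column) and %util (last column)
# per nvme device from an `iostat -dxm 5` log read on stdin.
summarise() {
  awk '$1 ~ /^nvme/ && NF >= 23 {
         n[$1]++; q[$1] += $(NF-1); u[$1] += $NF
       }
       END {
         for (d in n)
           printf "%s aqu-sz=%.2f util=%.1f%%\n", d, q[d]/n[d], u[d]/n[d]
       }'
}

# Two 5s samples for one device, trimmed to the same column shape:
printf '%s\n' \
  'nvme0n1 14831.60 3707.90 0 0 0.10 256 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.54 84.14' \
  'nvme0n1 14835.60 3708.90 0 0 0.10 256 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.56 84.40' \
  | summarise
# prints: nvme0n1 aqu-sz=1.55 util=84.3%
```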

Given that nobody from the block/MD side of things has responded
with any ideas yet, you might just have to bisect it to find out
where things went wrong...
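For what it's worth, the pass/fail side of a `git bisect run` can be
scripted off fio's summary line; a rough sketch, with the kernel
build/boot steps omitted and a made-up 60 GiB/s threshold chosen to
sit between the good (~88 GiB/s) and bad (~41 GiB/s) numbers earlier
in this thread:

```shell
#!/bin/sh
# Hypothetical `git bisect run` helper. The build/boot steps are
# omitted, and GOOD_GBPS=60 is an assumed threshold between the
# good and bad fio results reported above.
GOOD_GBPS=60

bw_gbps() {
  # Pull the aggregate bandwidth out of fio's "READ: bw=NN.NGiB/s" line.
  sed -n 's/.*READ: bw=\([0-9.]*\)GiB\/s.*/\1/p'
}

check() {
  # exit 0 => good kernel, exit 1 => bad kernel (git bisect convention)
  bw=$(bw_gbps)
  awk -v bw="$bw" -v min="$GOOD_GBPS" 'BEGIN { exit !(bw >= min) }'
}

# After booting each bisect step, something like:
#   fio --name=test --rw=read --bs=256k --filename=/dev/md127 --direct=1 \
#       --iodepth=64 --ioengine=libaio --runtime=30 --time_based | check
echo '   READ: bw=41.0GiB/s (44.1GB/s), ...' | check && echo good || echo bad
# prints: bad
```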

-Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Sequential read from NVMe/XFS twice slower on Fedora 42 than on Rocky 9.5
  2025-05-05 17:39           ` Laurence Oberman
@ 2025-05-22 15:07             ` Laurence Oberman
  2025-05-23  9:39               ` Anton Gavriliuk
  0 siblings, 1 reply; 11+ messages in thread
From: Laurence Oberman @ 2025-05-22 15:07 UTC (permalink / raw)
  To: Dave Chinner, Anton Gavriliuk; +Cc: linux-nvme, linux-xfs, linux-block

On Mon, 2025-05-05 at 13:39 -0400, Laurence Oberman wrote:
> On Mon, 2025-05-05 at 09:21 -0400, Laurence Oberman wrote:
> > On Mon, 2025-05-05 at 08:29 -0400, Laurence Oberman wrote:
> > > On Mon, 2025-05-05 at 07:50 +1000, Dave Chinner wrote:
> > > > [cc linux-block]
> > > > 
> > > > [original bug report:
> > > > https://lore.kernel.org/linux-xfs/CAAiJnjoo0--yp47UKZhbu8sNSZN6DZ-QzmZBMmtr1oC=fOOgAQ@mail.gmail.com/
> > > >  ]
> > > > 
> > > > On Sun, May 04, 2025 at 10:22:58AM +0300, Anton Gavriliuk
> > > > wrote:
> > > > > > What's the comparative performance of an identical read
> > > > > > profile directly on the raw MD raid0 device?
> > > > > 
> > > > > Rocky 9.5 (5.14.0-503.40.1.el9_5.x86_64)
> > > > > 
> > > > > [root@localhost ~]# df -mh /mnt
> > > > > Filesystem      Size  Used Avail Use% Mounted on
> > > > > /dev/md127       35T  1.3T   34T   4% /mnt
> > > > > 
> > > > > [root@localhost ~]# fio --name=test --rw=read --bs=256k
> > > > > --filename=/dev/md127 --direct=1 --numjobs=1 --iodepth=64 --exitall
> > > > > --group_reporting --ioengine=libaio --runtime=30 --time_based
> > > > > test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB,
> > > > > (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
> > > > > fio-3.39-44-g19d9
> > > > > Starting 1 process
> > > > > Jobs: 1 (f=1): [R(1)][100.0%][r=81.4GiB/s][r=334k IOPS][eta
> > > > > 00m:00s]
> > > > > test: (groupid=0, jobs=1): err= 0: pid=43189: Sun May  4 08:22:12 2025
> > > > >   read: IOPS=363k, BW=88.5GiB/s (95.1GB/s)(2656GiB/30001msec)
> > > > >     slat (nsec): min=971, max=312380, avg=1817.92,
> > > > > stdev=1367.75
> > > > >     clat (usec): min=78, max=1351, avg=174.46, stdev=28.86
> > > > >      lat (usec): min=80, max=1352, avg=176.27, stdev=28.81
> > > > > 
> > > > > Fedora 42 (6.14.5-300.fc42.x86_64)
> > > > > 
> > > > > [root@localhost anton]# df -mh /mnt
> > > > > Filesystem      Size  Used Avail Use% Mounted on
> > > > > /dev/md127       35T  1.3T   34T   4% /mnt
> > > > > 
> > > > > [root@localhost ~]# fio --name=test --rw=read --bs=256k
> > > > > --filename=/dev/md127 --direct=1 --numjobs=1 --iodepth=64 --exitall
> > > > > --group_reporting --ioengine=libaio --runtime=30 --time_based
> > > > > test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB,
> > > > > (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
> > > > > fio-3.39-44-g19d9
> > > > > Starting 1 process
> > > > > Jobs: 1 (f=1): [R(1)][100.0%][r=41.0GiB/s][r=168k IOPS][eta
> > > > > 00m:00s]
> > > > > test: (groupid=0, jobs=1): err= 0: pid=5685: Sun May  4 10:14:00 2025
> > > > >   read: IOPS=168k, BW=41.0GiB/s (44.1GB/s)(1231GiB/30001msec)
> > > > >     slat (usec): min=3, max=273, avg= 5.63, stdev= 1.48
> > > > >     clat (usec): min=67, max=2800, avg=374.99, stdev=29.90
> > > > >      lat (usec): min=72, max=2914, avg=380.62, stdev=30.22
> > > > 
> > > > So the MD block device shows the same read performance as the
> > > > filesystem on top of it. That means this is a regression at the
> > > > MD device layer or in the block/driver layers below it, i.e. it
> > > > is not an XFS or filesystem issue at all.
> > > > 
> > > > -Dave.
> > > 
> > > I have a lab setup; let me see if I can also reproduce this and
> > > then trace it to see where it is spending the time.
> > > 
> > 
> > 
> > Not seeing 1/2 the bandwidth, but the Fedora 42 kernel is still
> > significantly slower.
> > I will trace it.
> > 
> > 9.5 kernel - 5.14.0-503.40.1.el9_5.x86_64
> > 
> > Run status group 0 (all jobs):
> >    READ: bw=14.7GiB/s (15.8GB/s), 14.7GiB/s-14.7GiB/s (15.8GB/s-
> > 15.8GB/s), io=441GiB (473GB), run=30003-30003msec
> > 
> > Fedora42 kernel - 6.14.5-300.fc42.x86_64
> > 
> > Run status group 0 (all jobs):
> >    READ: bw=10.4GiB/s (11.2GB/s), 10.4GiB/s-10.4GiB/s (11.2GB/s-
> > 11.2GB/s), io=313GiB (336GB), run=30001-30001msec
> > 
> > 
> > 
> > 
> 
> Fedora42 kernel issue
> 
> While my difference is not as severe, we do see consistently lower
> performance on the Fedora kernel (6.14.5-300.fc42.x86_64).
> 
> When I remove the software raid and run against a single NVMe, the
> two kernels converge much closer.
> The latest upstream kernel does not show this regression either.
> 
> Not sure yet what is in the Fedora kernel causing this.
> We will work it via the Bugzilla.
> 
> Regards
> Laurence
> 
> TLDR
> 
> 
> Fedora Kernel
> -------------
> [root@penguin9 blktracefedora]# uname -a
> Linux penguin9.2 6.14.5-300.fc42.x86_64 #1 SMP PREEMPT_DYNAMIC
> Fri May  2 14:16:46 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
> 
> 5 runs of the fio against /dev/md1
> 
> [root@penguin9 ~]# for i in 1 2 3 4 5
> > do
> > ./run_fio.sh | grep -A1 "Run status group"
> > done
> Run status group 0 (all jobs):
>    READ: bw=11.3GiB/s (12.2GB/s), 11.3GiB/s-11.3GiB/s (12.2GB/s-
> 12.2GB/s), io=679GiB (729GB), run=60001-60001msec
> Run status group 0 (all jobs):
>    READ: bw=11.2GiB/s (12.0GB/s), 11.2GiB/s-11.2GiB/s (12.0GB/s-
> 12.0GB/s), io=669GiB (718GB), run=60001-60001msec
> Run status group 0 (all jobs):
>    READ: bw=11.4GiB/s (12.2GB/s), 11.4GiB/s-11.4GiB/s (12.2GB/s-
> 12.2GB/s), io=682GiB (733GB), run=60001-60001msec
> Run status group 0 (all jobs):
>    READ: bw=11.1GiB/s (11.9GB/s), 11.1GiB/s-11.1GiB/s (11.9GB/s-
> 11.9GB/s), io=664GiB (713GB), run=60001-60001msec
> Run status group 0 (all jobs):
>    READ: bw=11.3GiB/s (12.1GB/s), 11.3GiB/s-11.3GiB/s (12.1GB/s-
> 12.1GB/s), io=678GiB (728GB), run=60001-60001msec
> 
> RHEL9.5
> ------------
> Linux penguin9.2 5.14.0-503.40.1.el9_5.x86_64 #1 SMP PREEMPT_DYNAMIC
> Thu Apr 24 08:27:29 EDT 2025 x86_64 x86_64 x86_64 GNU/Linux
> 
> [root@penguin9 ~]# for i in 1 2 3 4 5; do ./run_fio.sh | grep -A1
> "Run status group"; done
> Run status group 0 (all jobs):
>    READ: bw=14.9GiB/s (16.0GB/s), 14.9GiB/s-14.9GiB/s (16.0GB/s-
> 16.0GB/s), io=894GiB (960GB), run=60003-60003msec
> Run status group 0 (all jobs):
>    READ: bw=14.6GiB/s (15.6GB/s), 14.6GiB/s-14.6GiB/s (15.6GB/s-
> 15.6GB/s), io=873GiB (938GB), run=60003-60003msec
> Run status group 0 (all jobs):
>    READ: bw=14.9GiB/s (16.0GB/s), 14.9GiB/s-14.9GiB/s (16.0GB/s-
> 16.0GB/s), io=892GiB (958GB), run=60003-60003msec
> Run status group 0 (all jobs):
>    READ: bw=14.5GiB/s (15.6GB/s), 14.5GiB/s-14.5GiB/s (15.6GB/s-
> 15.6GB/s), io=872GiB (936GB), run=60003-60003msec
> Run status group 0 (all jobs):
>    READ: bw=14.7GiB/s (15.8GB/s), 14.7GiB/s-14.7GiB/s (15.8GB/s-
> 15.8GB/s), io=884GiB (950GB), run=60003-60003msec
> 
> 
> Remove software raid from the layers and test just on a single nvme
> ----------------------------------------------------------------------
> 
> fio --name=test --rw=read --bs=256k --filename=/dev/nvme23n1 --direct=1
> --numjobs=1 --iodepth=64 --exitall --group_reporting --ioengine=libaio
> --runtime=60 --time_based
> 
> Linux penguin9.2 5.14.0-503.40.1.el9_5.x86_64 #1 SMP PREEMPT_DYNAMIC
> Thu Apr 24 08:27:29 EDT 2025 x86_64 x86_64 x86_64 GNU/Linux
> 
> [root@penguin9 ~]# ./run_nvme_fio.sh
> 
> Run status group 0 (all jobs):
>    READ: bw=3207MiB/s (3363MB/s), 3207MiB/s-3207MiB/s (3363MB/s-
> 3363MB/s), io=188GiB (202GB), run=60005-60005msec
> 
> 
> Back to fedora kernel
> 
> [root@penguin9 ~]# uname -a
> Linux penguin9.2 6.14.5-300.fc42.x86_64 #1 SMP PREEMPT_DYNAMIC Fri
> May
> 2 14:16:46 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
> 
> Within the margin of error
> 
> Run status group 0 (all jobs):
>    READ: bw=3061MiB/s (3210MB/s), 3061MiB/s-3061MiB/s (3210MB/s-
> 3210MB/s), io=179GiB (193GB), run=60006-60006msec
> 
> 
> Try recent upstream kernel
> ---------------------------
> [root@penguin9 ~]# uname -a
> Linux penguin9.2 6.13.0-rc7+ #2 SMP PREEMPT_DYNAMIC Mon May  5
> 10:59:12
> EDT 2025 x86_64 x86_64 x86_64 GNU/Linux
> 
> [root@penguin9 ~]# for i in 1 2 3 4 5; do ./run_fio.sh | grep -A1 "Run status group"; done
> Run status group 0 (all jobs):
>    READ: bw=14.6GiB/s (15.7GB/s), 14.6GiB/s-14.6GiB/s (15.7GB/s-
> 15.7GB/s), io=876GiB (941GB), run=60003-60003msec
> Run status group 0 (all jobs):
>    READ: bw=14.8GiB/s (15.9GB/s), 14.8GiB/s-14.8GiB/s (15.9GB/s-
> 15.9GB/s), io=891GiB (957GB), run=60003-60003msec
> Run status group 0 (all jobs):
>    READ: bw=14.8GiB/s (15.9GB/s), 14.8GiB/s-14.8GiB/s (15.9GB/s-
> 15.9GB/s), io=890GiB (956GB), run=60003-60003msec
> Run status group 0 (all jobs):
>    READ: bw=14.5GiB/s (15.6GB/s), 14.5GiB/s-14.5GiB/s (15.6GB/s-
> 15.6GB/s), io=871GiB (935GB), run=60003-60003msec
> 
> 
> Update to latest upstream
> -------------------------
> 
> [root@penguin9 ~]# uname -a
> Linux penguin9.2 6.15.0-rc5 #1 SMP PREEMPT_DYNAMIC Mon May  5
> 12:18:22
> EDT 2025 x86_64 x86_64 x86_64 GNU/Linux
> 
> Single nvme device is once again fine
> 
> Run status group 0 (all jobs):
>    READ: bw=3061MiB/s (3210MB/s), 3061MiB/s-3061MiB/s (3210MB/s-
> 3210MB/s), io=179GiB (193GB), run=60006-60006msec
> 
> 
> [root@penguin9 ~]# for i in 1 2 3 4 5; do ./run_fio.sh | grep -A1 "Run status group"; done
> Run status group 0 (all jobs):
>    READ: bw=14.7GiB/s (15.7GB/s), 14.7GiB/s-14.7GiB/s (15.7GB/s-
> 15.7GB/s), io=880GiB (945GB), run=60003-60003msec
> Run status group 0 (all jobs):
>    READ: bw=18.1GiB/s (19.4GB/s), 18.1GiB/s-18.1GiB/s (19.4GB/s-
> 19.4GB/s), io=1087GiB (1167GB), run=60003-60003msec
> Run status group 0 (all jobs):
>    READ: bw=18.0GiB/s (19.4GB/s), 18.0GiB/s-18.0GiB/s (19.4GB/s-
> 19.4GB/s), io=1082GiB (1162GB), run=60003-60003msec
> Run status group 0 (all jobs):
>    READ: bw=18.2GiB/s (19.5GB/s), 18.2GiB/s-18.2GiB/s (19.5GB/s-
> 19.5GB/s), io=1090GiB (1170GB), run=60005-60005msec
> 
> 

This fell off my radar; I apologize, I was on PTO last week.
Here is the Fedora kernel to install as mentioned
https://people.redhat.com/loberman/customer/.fedora/

tar hxvf fedora_kernel.tar.xz
rpm -ivh --force --nodeps *.rpm
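The five-run loops quoted above condense each fio run with `grep -A1 "Run status group"`. For anyone scripting the comparison, a small sketch of that extraction step taken one stage further (the extract_bw helper, the log path, and the sample log line are illustrative, not part of the thread):

```shell
# Hypothetical helper: extract just the READ bandwidth from a saved fio
# log, building on the `grep -A1 "Run status group"` filter used above.
# Assumes fio's standard "Run status group" summary format.
extract_bw() {
    grep -A1 "Run status group" "$1" |
        sed -n 's/.*READ: bw=\([0-9.]*GiB\/s\).*/\1/p'
}

# Demo against a captured sample log (illustrative data):
cat > /tmp/fio_run.log <<'EOF'
Run status group 0 (all jobs):
   READ: bw=14.7GiB/s (15.8GB/s), io=441GiB (473GB), run=30003-30003msec
EOF
extract_bw /tmp/fio_run.log
# prints: 14.7GiB/s
```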


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: Sequential read from NVMe/XFS twice slower on Fedora 42 than on Rocky 9.5
  2025-05-22 15:07             ` Laurence Oberman
@ 2025-05-23  9:39               ` Anton Gavriliuk
  0 siblings, 0 replies; 11+ messages in thread
From: Anton Gavriliuk @ 2025-05-23  9:39 UTC (permalink / raw)
  To: Laurence Oberman; +Cc: Dave Chinner, linux-nvme, linux-xfs, linux-block

> This fell off my radar; I apologize, I was on PTO last week.
> Here is the Fedora kernel to install as mentioned
> https://people.redhat.com/loberman/customer/.fedora/

> tar hxvf fedora_kernel.tar.xz
> rpm -ivh --force --nodeps *.rpm

The Rocky 9.5 kernel is still faster than the Fedora 42 kernel.

[root@memverge4 ~]# uname -r
6.14.5-300.fc42.x86_64
[root@memverge4 ~]#
[root@memverge4 ~]# cat /etc/*release
NAME="Rocky Linux"
VERSION="9.5 (Blue Onyx)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.5"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Rocky Linux 9.5 (Blue Onyx)"
ANSI_COLOR="0;32"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:rocky:rocky:9::baseos"
HOME_URL="https://rockylinux.org/"
VENDOR_NAME="RESF"
VENDOR_URL="https://resf.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
SUPPORT_END="2032-05-31"
ROCKY_SUPPORT_PRODUCT="Rocky-Linux-9"
ROCKY_SUPPORT_PRODUCT_VERSION="9.5"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.5"
Rocky Linux release 9.5 (Blue Onyx)
Rocky Linux release 9.5 (Blue Onyx)
Rocky Linux release 9.5 (Blue Onyx)


Block access -

[root@memverge4 ~]# fio --name=test --rw=read --bs=256k
--filename=/dev/md127 --direct=1 --numjobs=1 --iodepth=64 --exitall
--group_reporting --ioengine=libaio --runtime=30 --time_based
test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T)
256KiB-256KiB, ioengine=libaio, iodepth=64
fio-3.40
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=34.7GiB/s][r=142k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3566: Fri May 23 12:20:18 2025
  read: IOPS=142k, BW=34.7GiB/s (37.2GB/s)(1040GiB/30001msec)
    slat (usec): min=3, max=1065, avg= 6.68, stdev= 2.19
    clat (usec): min=75, max=2712, avg=443.75, stdev=36.12
     lat (usec): min=83, max=2835, avg=450.43, stdev=36.49

File access -

[root@memverge4 ~]# mount /dev/md127 /mnt
[root@memverge4 ~]# fio --name=test --rw=read --bs=256k
--filename=/mnt/testfile --direct=1 --numjobs=1 --iodepth=64 --exitall
--group_reporting --ioengine=libaio --runtime=30 --time_based
test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T)
256KiB-256KiB, ioengine=libaio, iodepth=64
fio-3.40
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=41.4GiB/s][r=169k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3666: Fri May 23 12:21:33 2025
  read: IOPS=172k, BW=42.1GiB/s (45.2GB/s)(1263GiB/30001msec)
    slat (usec): min=3, max=1054, avg= 5.46, stdev= 1.81
    clat (usec): min=118, max=2500, avg=365.50, stdev=28.08
     lat (usec): min=121, max=2794, avg=370.96, stdev=28.35

Back to latest 9.5 kernel (5.14.0-503.40.1.el9_5.x86_64)

Block access -

[root@memverge4 ~]# fio --name=test --rw=read --bs=256k
--filename=/dev/md127 --direct=1 --numjobs=1 --iodepth=64 --exitall
--group_reporting --ioengine=libaio --runtime=30 --time_based
test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T)
256KiB-256KiB, ioengine=libaio, iodepth=64
fio-3.40
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=70.8GiB/s][r=290k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=6121: Fri May 23 12:35:22 2025
  read: IOPS=287k, BW=70.1GiB/s (75.3GB/s)(2104GiB/30001msec)
    slat (nsec): min=1492, max=165338, avg=3029.64, stdev=1544.70
    clat (usec): min=71, max=1069, avg=219.56, stdev=21.22
     lat (usec): min=74, max=1233, avg=222.59, stdev=21.34

File access -

[root@memverge4 ~]# mount /dev/md127 /mnt
[root@memverge4 ~]# fio --name=test --rw=read --bs=256k
--filename=/mnt/testfile --direct=1 --numjobs=1 --iodepth=64 --exitall
--group_reporting --ioengine=libaio --runtime=30 --time_based
test: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T)
256KiB-256KiB, ioengine=libaio, iodepth=64
fio-3.40
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=73.5GiB/s][r=301k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=6200: Fri May 23 12:36:47 2025
  read: IOPS=301k, BW=73.4GiB/s (78.8GB/s)(2201GiB/30001msec)
    slat (nsec): min=1443, max=291427, avg=2951.98, stdev=1952.66
    clat (usec): min=118, max=1449, avg=209.84, stdev=23.13
     lat (usec): min=121, max=1562, avg=212.79, stdev=23.23
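Note that fio reports each bandwidth in both binary (GiB/s) and decimal (GB/s) units; the paired figures above are one measurement in two unit systems, not two measurements. A quick sanity check of the 73.4GiB/s result (only the unit definitions are assumed):

```shell
# 1 GiB/s = 1073741824 bytes/s; 1 GB/s = 10^9 bytes/s.
awk 'BEGIN { printf "%.1f GB/s\n", 73.4 * 1073741824 / 1e9 }'
# prints: 78.8 GB/s -- matching fio's "(78.8GB/s)" above
```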

Anton

чт, 22 мая 2025 г. в 18:08, Laurence Oberman <loberman@redhat.com>:
>
> On Mon, 2025-05-05 at 13:39 -0400, Laurence Oberman wrote:
> > On Mon, 2025-05-05 at 09:21 -0400, Laurence Oberman wrote:
> > > On Mon, 2025-05-05 at 08:29 -0400, Laurence Oberman wrote:
> > > > On Mon, 2025-05-05 at 07:50 +1000, Dave Chinner wrote:
> > > > > [cc linux-block]
> > > > >
> > > > > [original bug report:
> > > > > https://lore.kernel.org/linux-xfs/CAAiJnjoo0--yp47UKZhbu8sNSZN6DZ-QzmZBMmtr1oC=fOOgAQ@mail.gmail.com/
> > > > >  ]
> > > > >
> > > > > On Sun, May 04, 2025 at 10:22:58AM +0300, Anton Gavriliuk
> > > > > wrote:
> > > > > > > What's the comparitive performance of an identical read
> > > > > > > profile
> > > > > > > directly on the raw MD raid0 device?
> > > > > >
> > > > > > [snip: Rocky 9.5 and Fedora 42 fio results, quoted earlier in the thread]
> > > > >
> > > > > So the MD block device shows the same read performance as the
> > > > > filesystem on top of it. That means this is a regression at the
> > > > > MD device layer or in the block/driver layers below it, i.e. it
> > > > > is not an XFS or filesystem issue at all.
> > > > >
> > > > > -Dave.
> > > >
> > > > I have a lab setup, let me see if I can also reproduce and then
> > > > trace
> > > > this to see where it is spending the time
> > > >
> > >
> > >
> > > Not seeing half the bandwidth here, but it is still significantly
> > > slower on the Fedora 42 kernel.
> > > I will trace it.
> > >
> > > 9.5 kernel - 5.14.0-503.40.1.el9_5.x86_64
> > >
> > > Run status group 0 (all jobs):
> > >    READ: bw=14.7GiB/s (15.8GB/s), 14.7GiB/s-14.7GiB/s (15.8GB/s-
> > > 15.8GB/s), io=441GiB (473GB), run=30003-30003msec
> > >
> > > Fedora42 kernel - 6.14.5-300.fc42.x86_64
> > >
> > > Run status group 0 (all jobs):
> > >    READ: bw=10.4GiB/s (11.2GB/s), 10.4GiB/s-10.4GiB/s (11.2GB/s-
> > > 11.2GB/s), io=313GiB (336GB), run=30001-30001msec
> > >
> > >
> > >
> > >
> >
> > Fedora42 kernel issue
> >
> > While my difference is not as severe we do see a consistently lower
> > performance on the Fedora
> > kernel. (6.14.5-300.fc42.x86_64)
> >
> > When I remove the software raid and run against a single NVME we
> > converge to be much closer.
> > Also latest upstream does not show this regression either.
> >
> > Not sure yet what is in our Fedora kernel causing this.
> > We will work it via the Bugzilla
> >
> > Regards
> > Laurence
> >
> > [snip: benchmark summary quoted in full earlier in the thread]

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2025-05-23  9:39 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CAAiJnjoo0--yp47UKZhbu8sNSZN6DZ-QzmZBMmtr1oC=fOOgAQ@mail.gmail.com>
     [not found] ` <aBaVsli2AKbIa4We@dread.disaster.area>
     [not found]   ` <CAAiJnjor+=Zn62n09f-aJw2amX2wxQOb-2TB3rea9wDCU7ONoA@mail.gmail.com>
2025-05-04 21:50     ` Sequential read from NVMe/XFS twice slower on Fedora 42 than on Rocky 9.5 Dave Chinner
2025-05-05 12:29       ` Laurence Oberman
2025-05-05 13:21         ` Laurence Oberman
2025-05-05 17:39           ` Laurence Oberman
2025-05-22 15:07             ` Laurence Oberman
2025-05-23  9:39               ` Anton Gavriliuk
2025-05-05 22:56           ` Dave Chinner
2025-05-06 11:03             ` Anton Gavriliuk
2025-05-06 21:46               ` Dave Chinner
2025-05-07 12:26                 ` Anton Gavriliuk
2025-05-07 21:59                   ` Dave Chinner
