From: Bernd Schubert <bschubert@ddn.com>
To: Abhishek Gupta <abhishekmgupta@google.com>,
	Bernd Schubert <bernd@bsbernd.com>
Cc: "linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"miklos@szeredi.hu" <miklos@szeredi.hu>,
	Swetha Vadlakonda <swethv@google.com>
Subject: Re: FUSE: [Regression] Fuse legacy path performance scaling lost in v6.14 vs v6.8/6.11 (iodepth scaling with io_uring)
Date: Mon, 8 Dec 2025 17:52:16 +0000	[thread overview]
Message-ID: <bcb930c5-d526-42c9-a538-e645510bb944@ddn.com> (raw)
In-Reply-To: <CAPr64AKYisa=_X5fAB1ozgb3SoarKm19TD3hgwhX9csD92iBzA@mail.gmail.com>

Hi Abhishek,

Yes, I was able to run it today; I will send out a mail later. Sorry,
I am rather busy with other work.


Best,
Bernd

On 12/8/25 18:43, Abhishek Gupta wrote:
> Hi Bernd,
> 
> Were you able to reproduce the issue locally using the steps I provided?
> Please let me know if you require any further information or assistance.
> 
> Thanks,
> Abhishek
> 
> 
> On Tue, Dec 2, 2025 at 4:12 PM Abhishek Gupta <abhishekmgupta@google.com> wrote:
> 
>     Hi Bernd,
> 
>     Apologies for the delay in responding.
> 
>     Here are the steps to reproduce the FUSE performance issue locally
>     using a simple read-bench FUSE filesystem:
> 
>     1. Set up the FUSE Filesystem:
>     git clone https://github.com/jacobsa/fuse.git jacobsa-fuse
>     cd jacobsa-fuse/samples/mount_readbenchfs
>     # Replace <mnt_dir> with your desired mount point
>     go run mount.go --mount_point <mnt_dir>
> 
>     2. Run Fio Benchmark (iodepth 1):
>     fio --name=randread --rw=randread --ioengine=io_uring --thread
>     --filename=<mnt_dir>/test --filesize=1G --time_based=1 --runtime=5s
>     --bs=4K --numjobs=1 --iodepth=1 --direct=1 --group_reporting=1
> 
>     3. Run Fio Benchmark (iodepth 4); see also the sweep sketch below:
>     fio --name=randread --rw=randread --ioengine=io_uring --thread
>     --filename=<mnt_dir>/test --filesize=1G --time_based=1 --runtime=5s
>     --bs=4K --numjobs=1 --iodepth=4 --direct=1 --group_reporting=1
> 
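>     For convenience, a minimal sweep sketch (assuming bash with fio in
>     $PATH; <mnt_dir> remains a placeholder, as above) that runs the same
>     benchmark at several queue depths and prints only the aggregate READ
>     line of each run:
> 
>     #!/usr/bin/env bash
>     # Sweep io_uring queue depths against the FUSE mount and keep only
>     # the aggregate READ bandwidth line from each fio run.
>     MNT=${1:?usage: $0 <mnt_dir>}
>     for depth in 1 2 4 8; do
>         echo "== iodepth=${depth} =="
>         fio --name=randread --rw=randread --ioengine=io_uring --thread \
>             --filename="${MNT}/test" --filesize=1G --time_based=1 \
>             --runtime=5s --bs=4K --numjobs=1 --iodepth="${depth}" \
>             --direct=1 --group_reporting=1 | grep 'READ:'
>     done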
> 
>     Example Results on Kernel 6.14 (Regression Observed)
> 
>     The following output shows the lack of scaling on my machine with
>     Kernel 6.14; quadrupling the iodepth improves read bandwidth by only
>     about 1.2x (74.3 -> 87.6 MiB/s):
> 
>     Kernel:
>     Linux abhishek-west4a-2504 6.14.0-1019-gcp #20-Ubuntu SMP Wed Oct 15
>     00:41:12 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
> 
>     Iodepth = 1:
>     READ: bw=74.3MiB/s (77.9MB/s), ... io=372MiB (390MB), run=5001-5001msec
> 
>     Iodepth = 4:
>     READ: bw=87.6MiB/s (91.9MB/s), ... io=438MiB (459MB), run=5000-5000msec
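> 
>     For a quick side-by-side view, the aggregate lines from saved runs
>     can be extracted with a one-liner (a sketch; the log file names are
>     hypothetical):
> 
>     awk '/READ: bw=/ {print FILENAME, $2}' qd1.log qd4.log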
> 
>     Thanks,
>     Abhishek
> 
> 
>     On Fri, Nov 28, 2025 at 4:35 AM Bernd Schubert <bernd@bsbernd.com> wrote:
>     >
>     > Hi Abhishek,
>     >
>     > On 11/27/25 14:37, Abhishek Gupta wrote:
>     > > Hi Bernd,
>     > >
>     > > Thanks for looking into this.
>     > > Please find below the fio output on 6.11 & 6.14 kernel versions.
>     > >
>     > >
>     > > On kernel 6.11
>     > >
>     > > ~/gcsfuse$ uname -a
>     > > Linux abhishek-c4-192-west4a 6.11.0-1016-gcp #16~24.04.1-Ubuntu SMP
>     > > Wed May 28 02:40:52 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
>     > >
>     > > iodepth = 1
>     > > :~/fio-fio-3.38$ ./fio --name=randread --rw=randread
>     > > --ioengine=io_uring --thread
>     > > --filename_format='/home/abhishekmgupta_google_com/bucket/$jobnum'
>     > > --filesize=1G --time_based=1 --runtime=15s --bs=4K --numjobs=1
>     > > --iodepth=1 --group_reporting=1 --direct=1
>     > > randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B,
>     > > (T) 4096B-4096B, ioengine=io_uring, iodepth=1
>     > > fio-3.38
>     > > Starting 1 thread
>     > > ...
>     > > Run status group 0 (all jobs):
>     > >    READ: bw=3311KiB/s (3391kB/s), 3311KiB/s-3311KiB/s
>     > > (3391kB/s-3391kB/s), io=48.5MiB (50.9MB), run=15001-15001msec
>     > >
>     > > iodepth=4
>     > > :~/fio-fio-3.38$ ./fio --name=randread --rw=randread
>     > > --ioengine=io_uring --thread
>     > > --filename_format='/home/abhishekmgupta_google_com/bucket/$jobnum'
>     > > --filesize=1G --time_based=1 --runtime=15s --bs=4K --numjobs=1
>     > > --iodepth=4 --group_reporting=1 --direct=1
>     > > randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B,
>     > > (T) 4096B-4096B, ioengine=io_uring, iodepth=4
>     > > fio-3.38
>     > > Starting 1 thread
>     > > ...
>     > > Run status group 0 (all jobs):
>     > >    READ: bw=11.0MiB/s (11.6MB/s), 11.0MiB/s-11.0MiB/s
>     > > (11.6MB/s-11.6MB/s), io=166MiB (174MB), run=15002-15002msec
>     > >
>     > >
>     > > On kernel 6.14
>     > >
>     > > :~$ uname -a
>     > > Linux abhishek-west4a-2504 6.14.0-1019-gcp #20-Ubuntu SMP Wed Oct 15
>     > > 00:41:12 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
>     > >
>     > > iodepth=1
>     > > :~$ fio --name=randread --rw=randread --ioengine=io_uring --thread
>     > > --filename_format='/home/abhishekmgupta_google_com/bucket/$jobnum'
>     > > --filesize=1G --time_based=1 --runtime=15s --bs=4K --numjobs=1
>     > > --iodepth=1 --group_reporting=1 --direct=1
>     > > randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B,
>     > > (T) 4096B-4096B, ioengine=io_uring, iodepth=1
>     > > fio-3.38
>     > > Starting 1 thread
>     > > ...
>     > > Run status group 0 (all jobs):
>     > >    READ: bw=3576KiB/s (3662kB/s), 3576KiB/s-3576KiB/s
>     > > (3662kB/s-3662kB/s), io=52.4MiB (54.9MB), run=15001-15001msec
>     > >
>     > > iodepth=4
>     > > :~$ fio --name=randread --rw=randread --ioengine=io_uring --thread
>     > > --filename_format='/home/abhishekmgupta_google_com/bucket/$jobnum'
>     > > --filesize=1G --time_based=1 --runtime=15s --bs=4K --numjobs=1
>     > > --iodepth=4 --group_reporting=1 --direct=1
>     > > randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B,
>     > > (T) 4096B-4096B, ioengine=io_uring, iodepth=4
>     > > fio-3.38
>     > > ...
>     > > Run status group 0 (all jobs):
>     > >    READ: bw=3863KiB/s (3956kB/s), 3863KiB/s-3863KiB/s
>     > > (3956kB/s-3956kB/s), io=56.6MiB (59.3MB), run=15001-15001msec
>     >
>     > Assuming I find some time over the weekend, and given that I don't
>     > know anything about Google Cloud, how can I reproduce this?
>     >
>     >
>     > Thanks,
>     > Bernd
> 

