From: "Darrick J. Wong" <djwong@kernel.org>
To: Kundan Kumar <kundanthebest@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Kundan Kumar <kundan.kumar@samsung.com>,
jaegeuk@kernel.org, chao@kernel.org, viro@zeniv.linux.org.uk,
brauner@kernel.org, jack@suse.cz, miklos@szeredi.hu,
agruenba@redhat.com, trondmy@kernel.org, anna@kernel.org,
willy@infradead.org, mcgrof@kernel.org, clm@meta.com,
david@fromorbit.com, amir73il@gmail.com, axboe@kernel.dk,
hch@lst.de, ritesh.list@gmail.com, dave@stgolabs.net,
p.raghav@samsung.com, da.gomez@samsung.com,
linux-f2fs-devel@lists.sourceforge.net,
linux-fsdevel@vger.kernel.org, gfs2@lists.linux.dev,
linux-nfs@vger.kernel.org, linux-mm@kvack.org,
gost.dev@samsung.com
Subject: Re: [PATCH 00/13] Parallelizing filesystem writeback
Date: Wed, 2 Jul 2025 11:43:12 -0700
Message-ID: <20250702184312.GC9991@frogsfrogsfrogs>
In-Reply-To: <CALYkqXqs+mw3sqJg5X2K4wn8uo8dnr4uU0jcnnSTbKK9F4AiBA@mail.gmail.com>
On Wed, Jun 25, 2025 at 09:14:51PM +0530, Kundan Kumar wrote:
> >
> > Makes sense. It would be good to test this on a non-SMP machine, if
> > you can find one ;)
> >
>
> Tested with maxcpus=1 on the kernel cmdline. Parallel writeback falls
> back to single-threaded behavior, showing no change in bandwidth.
>
> - On PMEM:
> Base XFS : 70.7 MiB/s
> Parallel Writeback XFS : 70.5 MiB/s
> Base EXT4 : 137 MiB/s
> Parallel Writeback EXT4 : 138 MiB/s
>
> - On NVMe:
> Base XFS : 45.2 MiB/s
> Parallel Writeback XFS : 44.5 MiB/s
> Base EXT4 : 81.2 MiB/s
> Parallel Writeback EXT4 : 80.1 MiB/s
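The exact workload behind the numbers above wasn't posted; a representative fio job for exercising writeback (buffered sequential writes, so the flusher threads rather than the submitter move the data; all values here are illustrative assumptions) would look something like:

```ini
; buffered (non-direct) sequential writes to stress writeback;
; directory, size, and job count are placeholders, not the actual
; parameters used for the results quoted in this thread
[global]
directory=/mnt/test
ioengine=psync
rw=write
bs=4k
size=1g
numjobs=8
group_reporting=1

[writeback-test]
```

Bandwidth would then be read off fio's aggregate write result after a `sync` to ensure dirty data actually went through the writeback path.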
>
> >
> > Please test the performance on spinning disks, and with more filesystems?
> >
>
> On a spinning disk, random IO bandwidth is unchanged, while sequential
> IO performance declines. However, setting nr_wb_ctx = 1 via the configurable
> writeback knob (planned for the next version) eliminates the decline.
>
> echo 1 > /sys/class/bdi/8:16/nwritebacks
>
> We can fetch the device queue's rotational property and allocate BDI with
> nr_wb_ctx = 1 for rotational disks. Hope this is a viable solution for
> spinning disks?
Sounds good to me; spinning rust isn't known for IOPS.
Though: What about a raid0 of spinning rust? Do you see the same
declines for sequential IO?
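A two-spindle rig for that experiment could be assembled along these lines (a sketch only: device names, chunk size, and mount point are placeholders, and it needs root plus mdadm):

```shell
#!/bin/sh
# Sketch of a two-disk RAID0 test rig for repeating the sequential-IO
# comparison; /dev/sdb, /dev/sdc, and /mnt/test are placeholders.
setup_raid0_rig() {
    mdadm --create /dev/md0 --level=0 --raid-devices=2 \
        /dev/sdb /dev/sdc &&
    mkfs.xfs -f /dev/md0 &&
    mkdir -p /mnt/test &&
    mount /dev/md0 /mnt/test
}
# Run manually on a disposable test box: setup_raid0_rig
```

With striping, sequential writeback from multiple contexts may interleave less destructively than on a single spindle, which is presumably the point of the question.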
--D
> - Random IO
> Base XFS : 22.6 MiB/s
> Parallel Writeback XFS : 22.9 MiB/s
> Base EXT4 : 22.5 MiB/s
> Parallel Writeback EXT4 : 20.9 MiB/s
>
> - Sequential IO
> Base XFS : 156 MiB/s
> Parallel Writeback XFS : 133 MiB/s (-14.7%)
> Base EXT4 : 147 MiB/s
> Parallel Writeback EXT4 : 124 MiB/s (-15.6%)
>
> -Kundan
>
Thread overview: 40+ messages
2025-05-29 11:14 ` [PATCH 00/13] Parallelizing filesystem writeback Kundan Kumar
2025-05-29 11:14 ` [PATCH 01/13] writeback: add infra for parallel writeback Kundan Kumar
2025-05-29 11:14 ` [PATCH 02/13] writeback: add support to initialize and free multiple writeback ctxs Kundan Kumar
2025-05-29 11:14 ` [PATCH 03/13] writeback: link bdi_writeback to its corresponding bdi_writeback_ctx Kundan Kumar
2025-05-29 11:14 ` [PATCH 04/13] writeback: affine inode to a writeback ctx within a bdi Kundan Kumar
2025-06-02 14:24 ` Christoph Hellwig
2025-05-29 11:14 ` [PATCH 05/13] writeback: modify bdi_writeback search logic to search across all wb ctxs Kundan Kumar
2025-05-29 11:14 ` [PATCH 06/13] writeback: invoke all writeback contexts for flusher and dirtytime writeback Kundan Kumar
2025-05-29 11:14 ` [PATCH 07/13] writeback: modify sync related functions to iterate over all writeback contexts Kundan Kumar
2025-05-29 11:14 ` [PATCH 08/13] writeback: add support to collect stats for all writeback ctxs Kundan Kumar
2025-05-29 11:15 ` [PATCH 09/13] f2fs: add support in f2fs to handle multiple writeback contexts Kundan Kumar
2025-06-02 14:20 ` Christoph Hellwig
2025-05-29 11:15 ` [PATCH 10/13] fuse: add support for multiple writeback contexts in fuse Kundan Kumar
2025-06-02 14:21 ` Christoph Hellwig
2025-06-02 15:50 ` Bernd Schubert
2025-06-02 15:55 ` Christoph Hellwig
2025-05-29 11:15 ` [PATCH 11/13] gfs2: add support in gfs2 to handle multiple writeback contexts Kundan Kumar
2025-05-29 11:15 ` [PATCH 12/13] nfs: add support in nfs " Kundan Kumar
2025-06-02 14:22 ` Christoph Hellwig
2025-05-29 11:15 ` [PATCH 13/13] writeback: set the num of writeback contexts to number of online cpus Kundan Kumar
2025-06-03 14:36 ` kernel test robot
2025-05-30 3:37 ` [PATCH 00/13] Parallelizing filesystem writeback Andrew Morton
2025-06-25 15:44 ` Kundan Kumar
2025-07-02 18:43 ` Darrick J. Wong [this message]
2025-07-03 13:05 ` Christoph Hellwig
2025-07-04 7:02 ` Kundan Kumar
2025-07-07 14:28 ` Christoph Hellwig
2025-07-07 15:47 ` Jan Kara
2025-06-02 14:19 ` Christoph Hellwig
2025-06-03 9:16 ` Anuj Gupta/Anuj Gupta
2025-06-03 13:24 ` Christoph Hellwig
2025-06-03 13:52 ` Anuj gupta
2025-06-03 14:04 ` Christoph Hellwig
2025-06-03 14:05 ` Christoph Hellwig
2025-06-06 5:04 ` Kundan Kumar
2025-06-09 4:00 ` Christoph Hellwig
2025-06-04 9:22 ` Kundan Kumar
2025-06-11 15:51 ` Darrick J. Wong
2025-06-24 5:59 ` Kundan Kumar
2025-07-02 18:44 ` Darrick J. Wong