From: "Darrick J. Wong" <djwong@kernel.org>
To: Kundan Kumar <kundanthebest@gmail.com>
Cc: Anuj gupta <anuj1072538@gmail.com>,
Christoph Hellwig <hch@lst.de>,
Anuj Gupta/Anuj Gupta <anuj20.g@samsung.com>,
Kundan Kumar <kundan.kumar@samsung.com>,
jaegeuk@kernel.org, chao@kernel.org, viro@zeniv.linux.org.uk,
brauner@kernel.org, jack@suse.cz, miklos@szeredi.hu,
agruenba@redhat.com, trondmy@kernel.org, anna@kernel.org,
akpm@linux-foundation.org, willy@infradead.org,
mcgrof@kernel.org, clm@meta.com, david@fromorbit.com,
amir73il@gmail.com, axboe@kernel.dk, ritesh.list@gmail.com,
dave@stgolabs.net, p.raghav@samsung.com, da.gomez@samsung.com,
linux-f2fs-devel@lists.sourceforge.net,
linux-fsdevel@vger.kernel.org, gfs2@lists.linux.dev,
linux-nfs@vger.kernel.org, linux-mm@kvack.org,
gost.dev@samsung.com
Subject: Re: [PATCH 00/13] Parallelizing filesystem writeback
Date: Wed, 2 Jul 2025 11:44:39 -0700
Message-ID: <20250702184439.GD9991@frogsfrogsfrogs>
In-Reply-To: <CALYkqXpOBb1Ak2kEKWbO2Kc5NaGwb4XsX1q4eEaNWmO_4SQq9w@mail.gmail.com>

On Tue, Jun 24, 2025 at 11:29:28AM +0530, Kundan Kumar wrote:
> On Wed, Jun 11, 2025 at 9:21 PM Darrick J. Wong <djwong@kernel.org> wrote:
> >
> > On Wed, Jun 04, 2025 at 02:52:34PM +0530, Kundan Kumar wrote:
> > > > > > For xfs we used this command:
> > > > > > xfs_io -c "stat" /mnt/testfile
> > > > > > And for ext4 we used this:
> > > > > > filefrag /mnt/testfile
> > > > >
> > > > > filefrag merges contiguous extents, and only counts up for discontiguous
> > > > > mappings, while fsxattr.nextents counts all extents even if they are
> > > > > contiguous. So you probably want to use filefrag for both cases.
> > > >
> > > > Got it — thanks for the clarification. We'll switch to using filefrag
> > > > and will share updated extent count numbers accordingly.
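> > > >
> > > > As a quick cross-check (both commands run against the same test
> > > > file, /mnt/testfile, as above):
> > > >
> > > >   filefrag /mnt/testfile                  # prints "N extents found"
> > > >   xfs_io -c "stat" /mnt/testfile | grep fsxattr.nextents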
> > >
> > > Using filefrag, we recorded extent counts on xfs and ext4 at three
> > > stages:
> > > a. Just after a 1G random write,
> > > b. After a 30-second wait,
> > > c. After unmounting and remounting the filesystem.
> > >
> > > xfs
> > > Base
> > > a. 6251 b. 2526 c. 2526
> > > Parallel writeback
> > > a. 6183 b. 2326 c. 2326
> >
> > Interesting that the mapping record count goes down...
> >
> > I wonder, you said the xfs filesystem has 4 AGs and 12 cores, so I guess
> > wb_ctx_arr[] is 12? I wonder, do you see a knee point in writeback
> > throughput when the # of wb contexts exceeds the AG count?
> >
> > Though I guess for the (hopefully common) case of pure overwrites, we
> > don't have to do any metadata updates so we wouldn't really hit a
> > scaling limit due to ag count or log contention or whatever. Does that
> > square with what you see?
> >
>
> Hi Darrick,
>
> We analyzed AG count vs. number of writeback contexts to identify any
> knee point. Earlier, wb_ctx_arr[] was fixed at 12; now we varied nr_wb_ctx
> and measured the impact.
>
> We implemented a configurable number of writeback contexts to measure
> throughput more easily. This feature will be exposed in the next series.
> To configure it, we used: echo <nr_wb_ctx> > /sys/class/bdi/259:2/nwritebacks.
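>
> The bdi name here is the device's major:minor number; one way to derive
> it for the test device (assuming /dev/nvme0n1, as in the script below)
> would be:
>
>   MAJMIN=$(cat /sys/block/nvme0n1/dev)         # e.g. 259:2, as above
>   echo 6 > /sys/class/bdi/$MAJMIN/nwritebacks  # nr_wb_ctx = 6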
>
> In our test, writing 1G across 12 directories, bandwidth improved up to
> roughly the number of allocation groups (AGs), which appears to be the
> knee point, and gains tapered off beyond that. We also see a large
> increase in bandwidth, about 16x, from the base to nr_wb_ctx = 6.
>
> Base (single threaded) : 9799KiB/s
> Parallel Writeback (nr_wb_ctx = 1) : 9727KiB/s
> Parallel Writeback (nr_wb_ctx = 2) : 18.1MiB/s
> Parallel Writeback (nr_wb_ctx = 3) : 46.4MiB/s
> Parallel Writeback (nr_wb_ctx = 4) : 135MiB/s
> Parallel Writeback (nr_wb_ctx = 5) : 160MiB/s
> Parallel Writeback (nr_wb_ctx = 6) : 163MiB/s

Heh, nice!

> Parallel Writeback (nr_wb_ctx = 7) : 162MiB/s
> Parallel Writeback (nr_wb_ctx = 8) : 154MiB/s
> Parallel Writeback (nr_wb_ctx = 9) : 152MiB/s
> Parallel Writeback (nr_wb_ctx = 10) : 145MiB/s
> Parallel Writeback (nr_wb_ctx = 11) : 145MiB/s
> Parallel Writeback (nr_wb_ctx = 12) : 138MiB/s
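>
> (For reference: 163 MiB/s ≈ 166,912 KiB/s, i.e. about 16-17x the
> 9,799 KiB/s single-threaded base.)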
>
>
> System config
> ===========
> Number of CPUs = 12
> System RAM = 9G
> For XFS, number of AGs = 4 (see the check below)
> Used NVMe SSD of 3.84 TB (Enterprise SSD PM1733a)
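>
> One way to confirm the AG count (assuming the filesystem is mounted at
> /mnt, as in the script):
>
>   xfs_info /mnt | grep agcount                 # reports agcount=4 here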
>
> Script
> =====
> mkfs.xfs -f /dev/nvme0n1
> mount /dev/nvme0n1 /mnt
> echo <nr_wb_ctx> > /sys/class/bdi/259:2/nwritebacks
> sync
> echo 3 > /proc/sys/vm/drop_caches
>
> for i in {1..12}; do
> mkdir -p /mnt/dir$i
> done
>
> fio job_nvme.fio
>
> umount /mnt
> echo 3 > /proc/sys/vm/drop_caches
> sync
>
> fio job
> =====
> [global]
> bs=4k
> iodepth=1
> rw=randwrite
> ioengine=io_uring
> nrfiles=12
> numjobs=1 # Each job writes to a different file
> size=1g
> direct=0 # Buffered I/O to trigger writeback
> group_reporting=1
> create_on_open=1
> name=test
>
> [job1]
> directory=/mnt/dir1
>
> [job2]
> directory=/mnt/dir2
> ...
> ...
> [job12]
> directory=/mnt/dir12
>
> > > ext4
> > > Base
> > > a. 7080 b. 7080 c. 11
> > > Parallel writeback
> > > a. 5961 b. 5961 c. 11
> >
> > Hum, that's particularly ... interesting. I wonder what the mapping
> > count behaviors are when you turn off delayed allocation?
> >
> > --D
> >
>
> I attempted to disable delayed allocation by setting allocsize=4096
> during mount (mount -o allocsize=4096 /dev/pmem0 /mnt), but still
> observed a reduction in file fragments after a delay. Is there something
> I'm overlooking?
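>
> In case it helps, the ext4 run could also be repeated with delayed
> allocation disabled explicitly (assuming nodelalloc is the right mount
> option for that):
>
>   mkfs.ext4 -F /dev/pmem0
>   mount -o nodelalloc /dev/pmem0 /mnt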

Not that I know of. Maybe we should just take the win. :)

--D

> -Kundan
>