Date: Thu, 29 Jan 2026 14:40:02 -0800
From: "Darrick J. Wong"
To: Kundan Kumar
Cc: viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz,
	willy@infradead.org, mcgrof@kernel.org, clm@meta.com,
	david@fromorbit.com, amir73il@gmail.com, axboe@kernel.dk, hch@lst.de,
	ritesh.list@gmail.com, dave@stgolabs.net, cem@kernel.org,
	wangyufei@vivo.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-xfs@vger.kernel.org, gost.dev@samsung.com,
	anuj20.g@samsung.com, vishak.g@samsung.com, joshi.k@samsung.com
Subject: Re: [PATCH v3 4/6] xfs: tag folios with AG number during buffered
	write via iomap attach hook
Message-ID: <20260129224002.GF7712@frogsfrogsfrogs>
References: <20260116100818.7576-1-kundan.kumar@samsung.com>
	<20260116100818.7576-5-kundan.kumar@samsung.com>
	<20260129004745.GC7712@frogsfrogsfrogs>
In-Reply-To: <20260129004745.GC7712@frogsfrogsfrogs>

On Wed, Jan 28, 2026 at 04:47:45PM -0800, Darrick J. Wong wrote:
> On Fri, Jan 16, 2026 at 03:38:16PM +0530, Kundan Kumar wrote:
> > Use the iomap attach hook to tag folios with their predicted
> > allocation group at write time. Mapped extents derive AG directly;
> > delalloc and hole cases use a lightweight predictor.
> > 
> > Signed-off-by: Kundan Kumar
> > Signed-off-by: Anuj Gupta
> > ---
> >  fs/xfs/xfs_iomap.c | 114 +++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 114 insertions(+)
> > 
> > diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
> > index 490e12cb99be..3c927ce118fe 100644
> > --- a/fs/xfs/xfs_iomap.c
> > +++ b/fs/xfs/xfs_iomap.c
> > @@ -12,6 +12,9 @@
> >  #include "xfs_trans_resv.h"
> >  #include "xfs_mount.h"
> >  #include "xfs_inode.h"
> > +#include "xfs_alloc.h"
> > +#include "xfs_ag.h"
> > +#include "xfs_ag_resv.h"
> >  #include "xfs_btree.h"
> >  #include "xfs_bmap_btree.h"
> >  #include "xfs_bmap.h"
> > @@ -92,8 +95,119 @@ xfs_iomap_valid(
> >  	return true;
> >  }
> >  
> > +static xfs_agnumber_t
> > +xfs_predict_delalloc_agno(const struct xfs_inode *ip, loff_t pos, loff_t len)
> > +{
> > +	struct xfs_mount	*mp = ip->i_mount;
> > +	xfs_agnumber_t		start_agno, agno, best_agno;
> > +	struct xfs_perag	*pag;
> > +
> > +	xfs_extlen_t		free, resv, avail;
> > +	xfs_extlen_t		need_fsbs, min_free_fsbs;
> > +	xfs_extlen_t		best_free = 0;
> > +	xfs_agnumber_t		agcount = mp->m_sb.sb_agcount;
> > +
> > +	/* RT inodes allocate from the realtime volume */
> > +	if (XFS_IS_REALTIME_INODE(ip))
> > +		return XFS_INO_TO_AGNO(mp, ip->i_ino);
> > +
> > +	start_agno = XFS_INO_TO_AGNO(mp, ip->i_ino);
> > +
> > +	/*
> > +	 * size-based minimum free requirement.
> > +	 * Convert bytes to fsbs and require some slack.
> > +	 */
> > +	need_fsbs = XFS_B_TO_FSB(mp, (xfs_fsize_t)len);
> > +	min_free_fsbs = need_fsbs + max_t(xfs_extlen_t, need_fsbs >> 2, 128);
> > +
> > +	/*
> > +	 * scan AGs starting at start_agno and wrapping.
> > +	 * Pick the first AG that meets min_free_fsbs after reservations.
> > +	 * Keep a "best" fallback = maximum (free - resv).
> > +	 */
> > +	best_agno = start_agno;
> > +
> > +	for (xfs_agnumber_t i = 0; i < agcount; i++) {
> > +		agno = (start_agno + i) % agcount;
> > +		pag = xfs_perag_get(mp, agno);
> > +
> > +		if (!xfs_perag_initialised_agf(pag))
> > +			goto next;
> > +
> > +		free = READ_ONCE(pag->pagf_freeblks);
> > +		resv = xfs_ag_resv_needed(pag, XFS_AG_RESV_NONE);
> > +
> > +		if (free <= resv)
> > +			goto next;
> > +
> > +		avail = free - resv;
> > +
> > +		if (avail >= min_free_fsbs) {
> > +			xfs_perag_put(pag);
> > +			return agno;
> > +		}
> > +
> > +		if (avail > best_free) {
> > +			best_free = avail;
> > +			best_agno = agno;
> > +		}
> > +next:
> > +		xfs_perag_put(pag);
> > +	}
> > +
> > +	return best_agno;
> > +}
> > +
> > +static inline xfs_agnumber_t xfs_ag_from_iomap(const struct xfs_mount *mp,
> > +				const struct iomap *iomap,
> > +				const struct xfs_inode *ip, loff_t pos, size_t len)
> > +{
> > +	if (iomap->type == IOMAP_MAPPED || iomap->type == IOMAP_UNWRITTEN) {
> > +		/* iomap->addr is byte address on device for buffered I/O */
> > +		xfs_fsblock_t fsb = XFS_BB_TO_FSBT(mp, BTOBB(iomap->addr));
> > +
> > +		return XFS_FSB_TO_AGNO(mp, fsb);

Also, what happens if this is a realtime file?  For pre-rtgroups
filesystems there is no group number to use; and for rtgroups you have
to use xfs_rtb_to_rgno.

The i_ag_dirty_bitmap and the m_ag_wb array will be the wrong size if
rgcount != agcount; and also you probably don't want to have in the
same per-group writeback list two inodes with folios having the same
group number but writing to two different devices (data vs. rt).

--D

> > +	} else if (iomap->type == IOMAP_HOLE || iomap->type == IOMAP_DELALLOC) {
> > +		return xfs_predict_delalloc_agno(ip, pos, len);
> 
> Is it worth doing an AG scan to guess where the allocation might come
> from?  The predictions could turn out to be wrong by virtue of other
> delalloc regions being written back between the time that xfs_agp_set
> is called, and the actual bmapi_write call.
> 
> > +	}
> > +
> > +	return XFS_INO_TO_AGNO(mp, ip->i_ino);
> > +}
> > +
> > +static void xfs_agp_set(struct xfs_inode *ip, pgoff_t index,
> > +			xfs_agnumber_t agno, u8 type)
> > +{
> > +	u32 packed = xfs_agp_pack((u32)agno, type, true);
> > +
> > +	/* store as immediate value */
> > +	xa_store(&ip->i_ag_pmap, index, xa_mk_value(packed), GFP_NOFS);
> > +
> > +	/* Mark this AG as having potential dirty work */
> > +	if (ip->i_ag_dirty_bitmap && (u32)agno < ip->i_ag_dirty_bits)
> > +		set_bit((u32)agno, ip->i_ag_dirty_bitmap);
> > +}
> > +
> > +static void
> > +xfs_iomap_tag_folio(const struct iomap *iomap, struct folio *folio,
> > +		loff_t pos, size_t len)
> > +{
> > +	struct inode		*inode;
> > +	struct xfs_inode	*ip;
> > +	struct xfs_mount	*mp;
> > +	xfs_agnumber_t		agno;
> > +
> > +	inode = folio_mapping(folio)->host;
> > +	ip = XFS_I(inode);
> > +	mp = ip->i_mount;
> > +
> > +	agno = xfs_ag_from_iomap(mp, iomap, ip, pos, len);
> > +
> > +	xfs_agp_set(ip, folio->index, agno, (u8)iomap->type);
> 
> Hrm, so no, the ag_pmap only caches the ag number for the index of a
> folio, even if it spans many many blocks.
> 
> --D
> 
> > +}
> > +
> >  const struct iomap_write_ops xfs_iomap_write_ops = {
> >  	.iomap_valid = xfs_iomap_valid,
> > +	.tag_folio = xfs_iomap_tag_folio,
> >  };
> >  
> >  int
> > -- 
> > 2.25.1
> > 
> 