linux-fsdevel.vger.kernel.org archive mirror
From: Brian Foster <bfoster@redhat.com>
To: kernel test robot <lkp@intel.com>
Cc: linux-fsdevel@vger.kernel.org, oe-kbuild-all@lists.linux.dev,
	linux-xfs@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 5/7] xfs: fill dirty folios on zero range of unwritten mappings
Date: Fri, 6 Jun 2025 11:20:35 -0400
Message-ID: <aEMHQ_BJGDPEWk5J@bfoster>
In-Reply-To: <202506060903.vM8I4O0S-lkp@intel.com>

On Fri, Jun 06, 2025 at 10:02:34AM +0800, kernel test robot wrote:
> Hi Brian,
> 
> kernel test robot noticed the following build errors:
> 
> [auto build test ERROR on brauner-vfs/vfs.all]
> [also build test ERROR on akpm-mm/mm-everything linus/master next-20250605]
> [cannot apply to xfs-linux/for-next v6.15]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
> 
> url:    https://github.com/intel-lab-lkp/linux/commits/Brian-Foster/iomap-move-pos-len-BUG_ON-to-after-folio-lookup/20250606-013227
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git vfs.all
> patch link:    https://lore.kernel.org/r/20250605173357.579720-6-bfoster%40redhat.com
> patch subject: [PATCH 5/7] xfs: fill dirty folios on zero range of unwritten mappings
> config: i386-buildonly-randconfig-003-20250606 (https://download.01.org/0day-ci/archive/20250606/202506060903.vM8I4O0S-lkp@intel.com/config)
> compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250606/202506060903.vM8I4O0S-lkp@intel.com/reproduce)
> 

The series is currently based on the latest master. For some reason, when
it is applied to vfs.all, the iter variable hunk of this patch lands in
the wrong function.
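
For reference, here's a rough sketch (inferred from the 1602/1893 lines in
the report below, not the actual patch context) of where that hunk is
presumably meant to land, i.e. deriving the iter from the iomap passed
into xfs_buffered_write_iomap_begin():

	/*
	 * Sketch only: the iter local belongs in
	 * xfs_buffered_write_iomap_begin(), derived from the passed-in
	 * iomap, so the IOMAP_ZERO path can use it:
	 */
	struct iomap_iter	*iter = container_of(iomap, struct iomap_iter,
						     iomap);
	...
	end = iomap_fill_dirty_folios(iter, offset, len);

...rather than in xfs_seek_iomap_begin(), where it ends up unused.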

I'm not 100% sure what the conflict is, but from a quick look at both
branches, master appears to have the XFS atomic writes bits pulled in,
and those touch this area.

Anyway, I don't know whether the robots expect a different base here given
the combination of vfs (iomap), xfs, and mm, but if nothing else I'll see
whether this resolves itself by the time a v2 comes around...
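
(If the robots do want an explicit base recorded for a series that spans
vfs (iomap), xfs, and mm, presumably something along the lines of the
--base option the report points to would cover it when generating the v2,
e.g.:

	git format-patch --cover-letter --base=<base commit> -7

with the actual base commit still to be determined.)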

Brian

> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@intel.com>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202506060903.vM8I4O0S-lkp@intel.com/
> 
> All error/warnings (new ones prefixed by >>):
> 
>    fs/xfs/xfs_iomap.c: In function 'xfs_buffered_write_iomap_begin':
> >> fs/xfs/xfs_iomap.c:1602:55: error: 'iter' undeclared (first use in this function)
>     1602 |                         end = iomap_fill_dirty_folios(iter, offset, len);
>          |                                                       ^~~~
>    fs/xfs/xfs_iomap.c:1602:55: note: each undeclared identifier is reported only once for each function it appears in
>    fs/xfs/xfs_iomap.c: In function 'xfs_seek_iomap_begin':
> >> fs/xfs/xfs_iomap.c:1893:34: warning: unused variable 'iter' [-Wunused-variable]
>     1893 |         struct iomap_iter       *iter = container_of(iomap, struct iomap_iter,
>          |                                  ^~~~
> 
> 
> vim +/iter +1602 fs/xfs/xfs_iomap.c
> 
>   1498	
>   1499	static int
>   1500	xfs_buffered_write_iomap_begin(
>   1501		struct inode		*inode,
>   1502		loff_t			offset,
>   1503		loff_t			count,
>   1504		unsigned		flags,
>   1505		struct iomap		*iomap,
>   1506		struct iomap		*srcmap)
>   1507	{
>   1508		struct xfs_inode	*ip = XFS_I(inode);
>   1509		struct xfs_mount	*mp = ip->i_mount;
>   1510		xfs_fileoff_t		offset_fsb = XFS_B_TO_FSBT(mp, offset);
>   1511		xfs_fileoff_t		end_fsb = xfs_iomap_end_fsb(mp, offset, count);
>   1512		struct xfs_bmbt_irec	imap, cmap;
>   1513		struct xfs_iext_cursor	icur, ccur;
>   1514		xfs_fsblock_t		prealloc_blocks = 0;
>   1515		bool			eof = false, cow_eof = false, shared = false;
>   1516		int			allocfork = XFS_DATA_FORK;
>   1517		int			error = 0;
>   1518		unsigned int		lockmode = XFS_ILOCK_EXCL;
>   1519		unsigned int		iomap_flags = 0;
>   1520		u64			seq;
>   1521	
>   1522		if (xfs_is_shutdown(mp))
>   1523			return -EIO;
>   1524	
>   1525		if (xfs_is_zoned_inode(ip))
>   1526			return xfs_zoned_buffered_write_iomap_begin(inode, offset,
>   1527					count, flags, iomap, srcmap);
>   1528	
>   1529		/* we can't use delayed allocations when using extent size hints */
>   1530		if (xfs_get_extsz_hint(ip))
>   1531			return xfs_direct_write_iomap_begin(inode, offset, count,
>   1532					flags, iomap, srcmap);
>   1533	
>   1534		error = xfs_qm_dqattach(ip);
>   1535		if (error)
>   1536			return error;
>   1537	
>   1538		error = xfs_ilock_for_iomap(ip, flags, &lockmode);
>   1539		if (error)
>   1540			return error;
>   1541	
>   1542		if (XFS_IS_CORRUPT(mp, !xfs_ifork_has_extents(&ip->i_df)) ||
>   1543		    XFS_TEST_ERROR(false, mp, XFS_ERRTAG_BMAPIFORMAT)) {
>   1544			xfs_bmap_mark_sick(ip, XFS_DATA_FORK);
>   1545			error = -EFSCORRUPTED;
>   1546			goto out_unlock;
>   1547		}
>   1548	
>   1549		XFS_STATS_INC(mp, xs_blk_mapw);
>   1550	
>   1551		error = xfs_iread_extents(NULL, ip, XFS_DATA_FORK);
>   1552		if (error)
>   1553			goto out_unlock;
>   1554	
>   1555		/*
>   1556		 * Search the data fork first to look up our source mapping.  We
>   1557		 * always need the data fork map, as we have to return it to the
>   1558		 * iomap code so that the higher level write code can read data in to
>   1559		 * perform read-modify-write cycles for unaligned writes.
>   1560		 */
>   1561		eof = !xfs_iext_lookup_extent(ip, &ip->i_df, offset_fsb, &icur, &imap);
>   1562		if (eof)
>   1563			imap.br_startoff = end_fsb; /* fake hole until the end */
>   1564	
>   1565		/* We never need to allocate blocks for zeroing or unsharing a hole. */
>   1566		if ((flags & (IOMAP_UNSHARE | IOMAP_ZERO)) &&
>   1567		    imap.br_startoff > offset_fsb) {
>   1568			xfs_hole_to_iomap(ip, iomap, offset_fsb, imap.br_startoff);
>   1569			goto out_unlock;
>   1570		}
>   1571	
>   1572		/*
>   1573		 * For zeroing, trim extents that extend beyond the EOF block. If a
>   1574		 * delalloc extent starts beyond the EOF block, convert it to an
>   1575		 * unwritten extent.
>   1576		 */
>   1577		if (flags & IOMAP_ZERO) {
>   1578			xfs_fileoff_t eof_fsb = XFS_B_TO_FSB(mp, XFS_ISIZE(ip));
>   1579			u64 end;
>   1580	
>   1581			if (isnullstartblock(imap.br_startblock) &&
>   1582			    offset_fsb >= eof_fsb)
>   1583				goto convert_delay;
>   1584			if (offset_fsb < eof_fsb && end_fsb > eof_fsb)
>   1585				end_fsb = eof_fsb;
>   1586	
>   1587			/*
>   1588			 * Look up dirty folios for unwritten mappings within EOF.
>   1589			 * Providing this bypasses the flush iomap uses to trigger
>   1590			 * extent conversion when unwritten mappings have dirty
>   1591			 * pagecache in need of zeroing.
>   1592			 *
>   1593			 * Trim the mapping to the end pos of the lookup, which in turn
>   1594			 * was trimmed to the end of the batch if it became full before
>   1595			 * the end of the mapping.
>   1596			 */
>   1597			if (imap.br_state == XFS_EXT_UNWRITTEN &&
>   1598			    offset_fsb < eof_fsb) {
>   1599				loff_t len = min(count,
>   1600						 XFS_FSB_TO_B(mp, imap.br_blockcount));
>   1601	
> > 1602				end = iomap_fill_dirty_folios(iter, offset, len);
>   1603				end_fsb = min_t(xfs_fileoff_t, end_fsb,
>   1604						XFS_B_TO_FSB(mp, end));
>   1605			}
>   1606	
>   1607			xfs_trim_extent(&imap, offset_fsb, end_fsb - offset_fsb);
>   1608		}
>   1609	
>   1610		/*
>   1611		 * Search the COW fork extent list even if we did not find a data fork
>   1612		 * extent.  This serves two purposes: first this implements the
>   1613		 * speculative preallocation using cowextsize, so that we also unshare
>   1614		 * block adjacent to shared blocks instead of just the shared blocks
>   1615		 * themselves.  Second the lookup in the extent list is generally faster
>   1616		 * than going out to the shared extent tree.
>   1617		 */
>   1618		if (xfs_is_cow_inode(ip)) {
>   1619			if (!ip->i_cowfp) {
>   1620				ASSERT(!xfs_is_reflink_inode(ip));
>   1621				xfs_ifork_init_cow(ip);
>   1622			}
>   1623			cow_eof = !xfs_iext_lookup_extent(ip, ip->i_cowfp, offset_fsb,
>   1624					&ccur, &cmap);
>   1625			if (!cow_eof && cmap.br_startoff <= offset_fsb) {
>   1626				trace_xfs_reflink_cow_found(ip, &cmap);
>   1627				goto found_cow;
>   1628			}
>   1629		}
>   1630	
>   1631		if (imap.br_startoff <= offset_fsb) {
>   1632			/*
>   1633			 * For reflink files we may need a delalloc reservation when
>   1634			 * overwriting shared extents.   This includes zeroing of
>   1635			 * existing extents that contain data.
>   1636			 */
>   1637			if (!xfs_is_cow_inode(ip) ||
>   1638			    ((flags & IOMAP_ZERO) && imap.br_state != XFS_EXT_NORM)) {
>   1639				trace_xfs_iomap_found(ip, offset, count, XFS_DATA_FORK,
>   1640						&imap);
>   1641				goto found_imap;
>   1642			}
>   1643	
>   1644			xfs_trim_extent(&imap, offset_fsb, end_fsb - offset_fsb);
>   1645	
>   1646			/* Trim the mapping to the nearest shared extent boundary. */
>   1647			error = xfs_bmap_trim_cow(ip, &imap, &shared);
>   1648			if (error)
>   1649				goto out_unlock;
>   1650	
>   1651			/* Not shared?  Just report the (potentially capped) extent. */
>   1652			if (!shared) {
>   1653				trace_xfs_iomap_found(ip, offset, count, XFS_DATA_FORK,
>   1654						&imap);
>   1655				goto found_imap;
>   1656			}
>   1657	
>   1658			/*
>   1659			 * Fork all the shared blocks from our write offset until the
>   1660			 * end of the extent.
>   1661			 */
>   1662			allocfork = XFS_COW_FORK;
>   1663			end_fsb = imap.br_startoff + imap.br_blockcount;
>   1664		} else {
>   1665			/*
>   1666			 * We cap the maximum length we map here to MAX_WRITEBACK_PAGES
>   1667			 * pages to keep the chunks of work done where somewhat
>   1668			 * symmetric with the work writeback does.  This is a completely
>   1669			 * arbitrary number pulled out of thin air.
>   1670			 *
>   1671			 * Note that the values needs to be less than 32-bits wide until
>   1672			 * the lower level functions are updated.
>   1673			 */
>   1674			count = min_t(loff_t, count, 1024 * PAGE_SIZE);
>   1675			end_fsb = xfs_iomap_end_fsb(mp, offset, count);
>   1676	
>   1677			if (xfs_is_always_cow_inode(ip))
>   1678				allocfork = XFS_COW_FORK;
>   1679		}
>   1680	
>   1681		if (eof && offset + count > XFS_ISIZE(ip)) {
>   1682			/*
>   1683			 * Determine the initial size of the preallocation.
>   1684			 * We clean up any extra preallocation when the file is closed.
>   1685			 */
>   1686			if (xfs_has_allocsize(mp))
>   1687				prealloc_blocks = mp->m_allocsize_blocks;
>   1688			else if (allocfork == XFS_DATA_FORK)
>   1689				prealloc_blocks = xfs_iomap_prealloc_size(ip, allocfork,
>   1690							offset, count, &icur);
>   1691			else
>   1692				prealloc_blocks = xfs_iomap_prealloc_size(ip, allocfork,
>   1693							offset, count, &ccur);
>   1694			if (prealloc_blocks) {
>   1695				xfs_extlen_t	align;
>   1696				xfs_off_t	end_offset;
>   1697				xfs_fileoff_t	p_end_fsb;
>   1698	
>   1699				end_offset = XFS_ALLOC_ALIGN(mp, offset + count - 1);
>   1700				p_end_fsb = XFS_B_TO_FSBT(mp, end_offset) +
>   1701						prealloc_blocks;
>   1702	
>   1703				align = xfs_eof_alignment(ip);
>   1704				if (align)
>   1705					p_end_fsb = roundup_64(p_end_fsb, align);
>   1706	
>   1707				p_end_fsb = min(p_end_fsb,
>   1708					XFS_B_TO_FSB(mp, mp->m_super->s_maxbytes));
>   1709				ASSERT(p_end_fsb > offset_fsb);
>   1710				prealloc_blocks = p_end_fsb - end_fsb;
>   1711			}
>   1712		}
>   1713	
>   1714		/*
>   1715		 * Flag newly allocated delalloc blocks with IOMAP_F_NEW so we punch
>   1716		 * them out if the write happens to fail.
>   1717		 */
>   1718		iomap_flags |= IOMAP_F_NEW;
>   1719		if (allocfork == XFS_COW_FORK) {
>   1720			error = xfs_bmapi_reserve_delalloc(ip, allocfork, offset_fsb,
>   1721					end_fsb - offset_fsb, prealloc_blocks, &cmap,
>   1722					&ccur, cow_eof);
>   1723			if (error)
>   1724				goto out_unlock;
>   1725	
>   1726			trace_xfs_iomap_alloc(ip, offset, count, allocfork, &cmap);
>   1727			goto found_cow;
>   1728		}
>   1729	
>   1730		error = xfs_bmapi_reserve_delalloc(ip, allocfork, offset_fsb,
>   1731				end_fsb - offset_fsb, prealloc_blocks, &imap, &icur,
>   1732				eof);
>   1733		if (error)
>   1734			goto out_unlock;
>   1735	
>   1736		trace_xfs_iomap_alloc(ip, offset, count, allocfork, &imap);
>   1737	found_imap:
>   1738		seq = xfs_iomap_inode_sequence(ip, iomap_flags);
>   1739		xfs_iunlock(ip, lockmode);
>   1740		return xfs_bmbt_to_iomap(ip, iomap, &imap, flags, iomap_flags, seq);
>   1741	
>   1742	convert_delay:
>   1743		xfs_iunlock(ip, lockmode);
>   1744		truncate_pagecache(inode, offset);
>   1745		error = xfs_bmapi_convert_delalloc(ip, XFS_DATA_FORK, offset,
>   1746						   iomap, NULL);
>   1747		if (error)
>   1748			return error;
>   1749	
>   1750		trace_xfs_iomap_alloc(ip, offset, count, XFS_DATA_FORK, &imap);
>   1751		return 0;
>   1752	
>   1753	found_cow:
>   1754		if (imap.br_startoff <= offset_fsb) {
>   1755			error = xfs_bmbt_to_iomap(ip, srcmap, &imap, flags, 0,
>   1756					xfs_iomap_inode_sequence(ip, 0));
>   1757			if (error)
>   1758				goto out_unlock;
>   1759		} else {
>   1760			xfs_trim_extent(&cmap, offset_fsb,
>   1761					imap.br_startoff - offset_fsb);
>   1762		}
>   1763	
>   1764		iomap_flags |= IOMAP_F_SHARED;
>   1765		seq = xfs_iomap_inode_sequence(ip, iomap_flags);
>   1766		xfs_iunlock(ip, lockmode);
>   1767		return xfs_bmbt_to_iomap(ip, iomap, &cmap, flags, iomap_flags, seq);
>   1768	
>   1769	out_unlock:
>   1770		xfs_iunlock(ip, lockmode);
>   1771		return error;
>   1772	}
>   1773	
> 
> -- 
> 0-DAY CI Kernel Test Service
> https://github.com/intel/lkp-tests/wiki
> 


Thread overview: 40+ messages
2025-06-05 17:33 [PATCH 0/7] iomap: zero range folio batch support Brian Foster
2025-06-05 17:33 ` [PATCH 1/7] iomap: move pos+len BUG_ON() to after folio lookup Brian Foster
2025-06-09 16:16   ` Darrick J. Wong
2025-06-10  4:20     ` Christoph Hellwig
2025-06-10 12:16       ` Brian Foster
2025-06-05 17:33 ` [PATCH 2/7] filemap: add helper to look up dirty folios in a range Brian Foster
2025-06-09 15:48   ` Darrick J. Wong
2025-06-10  4:21     ` Christoph Hellwig
2025-06-10 12:17     ` Brian Foster
2025-06-10  4:22   ` Christoph Hellwig
2025-06-05 17:33 ` [PATCH 3/7] iomap: optional zero range dirty folio processing Brian Foster
2025-06-09 16:04   ` Darrick J. Wong
2025-06-10  4:27     ` Christoph Hellwig
2025-06-10 12:21       ` Brian Foster
2025-06-10 12:21     ` Brian Foster
2025-06-10 13:29       ` Christoph Hellwig
2025-06-10 14:19         ` Brian Foster
2025-06-11  3:54           ` Christoph Hellwig
2025-06-10 14:55       ` Darrick J. Wong
2025-06-11  3:55         ` Christoph Hellwig
2025-06-12  4:06           ` Darrick J. Wong
2025-06-10  4:27   ` Christoph Hellwig
2025-06-05 17:33 ` [PATCH 4/7] xfs: always trim mapping to requested range for zero range Brian Foster
2025-06-09 16:07   ` Darrick J. Wong
2025-06-05 17:33 ` [PATCH 5/7] xfs: fill dirty folios on zero range of unwritten mappings Brian Foster
2025-06-06  2:02   ` kernel test robot
2025-06-06 15:20     ` Brian Foster [this message]
2025-06-09 16:12   ` Darrick J. Wong
2025-06-10  4:31     ` Christoph Hellwig
2025-06-10 12:24     ` Brian Foster
2025-07-02 18:50       ` Darrick J. Wong
2025-06-05 17:33 ` [PATCH 6/7] iomap: remove old partial eof zeroing optimization Brian Foster
2025-06-10  4:32   ` Christoph Hellwig
2025-06-05 17:33 ` [PATCH RFC 7/7] xfs: error tag to force zeroing on debug kernels Brian Foster
2025-06-10  4:33   ` Christoph Hellwig
2025-06-10 12:26     ` Brian Foster
2025-06-10 13:30       ` Christoph Hellwig
2025-06-10 14:20         ` Brian Foster
2025-06-10 19:12           ` Brian Foster
2025-06-11  3:56             ` Christoph Hellwig
