From: kernel test robot <lkp@intel.com>
To: Qu Wenruo <wqu@suse.com>, linux-btrfs@vger.kernel.org
Cc: oe-kbuild-all@lists.linux.dev
Subject: Re: [PATCH 3/4] btrfs: subpage: introduce helpers to handle subpage delalloc locking
Date: Tue, 20 Feb 2024 08:52:33 +0800
Message-ID: <202402200823.Su3xBnia-lkp@intel.com>
In-Reply-To: <02f5ad17b6415670bea37433c8ca332a06253315.1708322044.git.wqu@suse.com>
Hi Qu,
kernel test robot noticed the following build errors:
[auto build test ERROR on kdave/for-next]
[also build test ERROR on linus/master v6.8-rc5 next-20240219]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information
(see the example below).]
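A minimal example of recording base-tree information when generating a
series (the "-4" patch count here is hypothetical, matching a 4-patch
series):

	git format-patch --base=auto --cover-letter -4 HEAD

With an upstream tracking branch configured, '--base=auto' appends a
"base-commit:" trailer so that CI systems and maintainers can apply the
series onto the intended tree.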
url: https://github.com/intel-lab-lkp/linux/commits/Qu-Wenruo/btrfs-make-__extent_writepage_io-to-write-specified-range-only/20240219-141053
base: https://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git for-next
patch link: https://lore.kernel.org/r/02f5ad17b6415670bea37433c8ca332a06253315.1708322044.git.wqu%40suse.com
patch subject: [PATCH 3/4] btrfs: subpage: introduce helpers to handle subpage delalloc locking
config: x86_64-rhel-8.3 (https://download.01.org/0day-ci/archive/20240220/202402200823.Su3xBnia-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240220/202402200823.Su3xBnia-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new
version of the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202402200823.Su3xBnia-lkp@intel.com/
All errors (new ones prefixed by >>):
   fs/btrfs/subpage.c: In function 'btrfs_folio_set_writer_lock':
>> fs/btrfs/subpage.c:758:60: error: 'struct btrfs_subpage_info' has no member named 'locked_offset'; did you mean 'checked_offset'?
     758 |         start_bit = subpage_calc_start_bit(fs_info, folio, locked, start, len);
         |                                                            ^~~~~~
   fs/btrfs/subpage.c:374:45: note: in definition of macro 'subpage_calc_start_bit'
     374 |         start_bit += fs_info->subpage_info->name##_offset; \
         |                                             ^~~~
   fs/btrfs/subpage.c: In function 'btrfs_subpage_find_writer_locked':
   fs/btrfs/subpage.c:785:70: error: 'struct btrfs_subpage_info' has no member named 'locked_offset'; did you mean 'checked_offset'?
     785 |         const int start_bit = subpage_calc_start_bit(fs_info, folio, locked,
         |                                                                      ^~~~~~
   fs/btrfs/subpage.c:374:45: note: in definition of macro 'subpage_calc_start_bit'
     374 |         start_bit += fs_info->subpage_info->name##_offset; \
         |                                             ^~~~
   fs/btrfs/subpage.c:787:55: error: 'struct btrfs_subpage_info' has no member named 'locked_offset'; did you mean 'checked_offset'?
     787 |         const int locked_bitmap_start = subpage_info->locked_offset;
         |                                                       ^~~~~~~~~~~~~
         |                                                       checked_offset
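The failure mode is mechanical: subpage_calc_start_bit() token-pastes
its 'name' argument into a struct member lookup ('name##_offset'), so
passing 'locked' only compiles if 'struct btrfs_subpage_info' declares a
'locked_offset' member, which it does not at this point in the series.
A minimal sketch of the mechanism, simplified from the diagnostics above
(members other than checked_offset and bitmap_nr_bits are illustrative):

	/* Each tracked state owns a region inside one shared bitmap. */
	struct btrfs_subpage_info {
		unsigned int bitmap_nr_bits;  /* bits per state region */
		unsigned int uptodate_offset; /* start of 'uptodate' region */
		unsigned int checked_offset;  /* start of 'checked' region */
		/* No locked_offset member, hence the errors above. */
	};

	/* 'name' is pasted into a member name at preprocessing time. */
	#define subpage_calc_start_bit(fs_info, folio, name, start, len)     \
	({                                                                    \
		unsigned int start_bit =                                      \
			offset_in_page(start) >> (fs_info)->sectorsize_bits; \
		start_bit += (fs_info)->subpage_info->name##_offset;         \
		start_bit;                                                    \
	})

So subpage_calc_start_bit(fs_info, folio, locked, ...) expands to a
'locked_offset' access, and gcc suggests the nearest existing member,
'checked_offset'.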
vim +758 fs/btrfs/subpage.c
736
737 /*
738 * This is for a folio already locked by plain lock_page()/folio_lock(), which
739 * doesn't have any subpage awareness.
740 *
741 * This would populate the involved subpage ranges so that subpage helpers can
742 * properly unlock them.
743 */
744 void btrfs_folio_set_writer_lock(const struct btrfs_fs_info *fs_info,
745 struct folio *folio, u64 start, u32 len)
746 {
747 struct btrfs_subpage *subpage;
748 unsigned long flags;
749 int start_bit;
750 int nbits;
751 int ret;
752
753 ASSERT(folio_test_locked(folio));
754 if (unlikely(!fs_info) || !btrfs_is_subpage(fs_info, folio->mapping))
755 return;
756
757 subpage = folio_get_private(folio);
> 758 start_bit = subpage_calc_start_bit(fs_info, folio, locked, start, len);
759 nbits = len >> fs_info->sectorsize_bits;
760 spin_lock_irqsave(&subpage->lock, flags);
761 /* Target range should not yet be locked. */
762 ASSERT(bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits));
763 bitmap_set(subpage->bitmaps, start_bit, nbits);
764 ret = atomic_add_return(nbits, &subpage->writers);
765 ASSERT(ret <= fs_info->subpage_info->bitmap_nr_bits);
766 spin_unlock_irqrestore(&subpage->lock, flags);
767 }
768
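For reference, the function pairs a per-folio bitmap (tracking which
sectors of the folio are writer-locked) with an atomic 'writers' count
consulted at unlock time. For it to build, a 'locked' bitmap region must
be declared alongside the existing ones. A hypothetical sketch of that
declaration (the actual fix may instead be a missing or misordered hunk
elsewhere in the series):

	/* Hypothetical addition; regions are laid out back to back. */
	struct btrfs_subpage_info {
		unsigned int bitmap_nr_bits;
		unsigned int checked_offset;
		unsigned int locked_offset;  /* new: 'locked' region start */
		unsigned int total_nr_bits;  /* grows by bitmap_nr_bits */
	};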
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
Thread overview: 14+ messages
2024-02-19 6:08 [PATCH 0/4] btrfs: initial subpage support for zoned devices Qu Wenruo
2024-02-19 6:08 ` [PATCH 1/4] btrfs: make __extent_writepage_io() to write specified range only Qu Wenruo
2024-02-19 6:08 ` [PATCH 2/4] btrfs: lock subpage ranges in one go for writepage_delalloc() Qu Wenruo
2024-02-19 6:08 ` [PATCH 3/4] btrfs: subpage: introduce helpers to handle subpage delalloc locking Qu Wenruo
2024-02-20 0:52 ` kernel test robot [this message]
2024-02-20 1:16 ` Qu Wenruo
2024-02-20 7:58 ` Yujie Liu
2024-02-20 8:26 ` Qu Wenruo
2024-02-20 9:23 ` Yujie Liu
2024-02-19 6:08 ` [PATCH 4/4] btrfs: migrate writepage_delalloc() to use subpage helpers Qu Wenruo
2024-03-04 3:13 ` [PATCH 0/4] btrfs: initial subpage support for zoned devices Qu Wenruo
2024-03-04 5:10 ` Neal Gompa
2024-03-04 7:32 ` Qu Wenruo
2024-03-04 10:51 ` Neal Gompa