* [dhowells-fs:netfs-next 10/16] fs/netfs/write_issue.c:850 netfs_writeback_single() warn: inconsistent returns '&ictx->wb_lock'.
From: kernel test robot @ 2026-03-01 4:48 UTC (permalink / raw)
To: oe-kbuild; +Cc: lkp, Dan Carpenter
BCC: lkp@intel.com
CC: oe-kbuild-all@lists.linux.dev
TO: David Howells <dhowells@redhat.com>
tree: https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git netfs-next
head: 64a2722c2705a6c6f46b7214477a4a0a2106c1b6
commit: a1cee75a302b7af9ddd2449d38863fb935066e80 [10/16] netfs: Switch to using bvecq rather than folio_queue and rolling_buffer
:::::: branch date: 6 hours ago
:::::: commit date: 7 hours ago
config: x86_64-randconfig-r073-20260301 (https://download.01.org/0day-ci/archive/20260301/202603011243.LsWeX4QE-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
rustc: rustc 1.88.0 (6b00bc388 2025-06-23)
smatch version: v0.5.0-8994-gd50c5a4c
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Reported-by: Dan Carpenter <error27@gmail.com>
| Closes: https://lore.kernel.org/r/202603011243.LsWeX4QE-lkp@intel.com/
smatch warnings:
fs/netfs/write_issue.c:850 netfs_writeback_single() warn: inconsistent returns '&ictx->wb_lock'.
vim +850 fs/netfs/write_issue.c
288ace2f57c9d0 David Howells 2024-03-18 763
49866ce7ea8d41 David Howells 2024-12-16 764 /**
49866ce7ea8d41 David Howells 2024-12-16 765 * netfs_writeback_single - Write back a monolithic payload
49866ce7ea8d41 David Howells 2024-12-16 766 * @mapping: The mapping to write from
49866ce7ea8d41 David Howells 2024-12-16 767 * @wbc: Hints from the VM
4e45977f1ea9ab David Howells 2026-01-07 768 * @iter: Data to write.
49866ce7ea8d41 David Howells 2024-12-16 769 *
49866ce7ea8d41 David Howells 2024-12-16 770 * Write a monolithic, non-pagecache object back to the server and/or
a1cee75a302b7a David Howells 2026-02-28 771 * the cache. There's a maximum of one subrequest per stream.
49866ce7ea8d41 David Howells 2024-12-16 772 */
49866ce7ea8d41 David Howells 2024-12-16 773 int netfs_writeback_single(struct address_space *mapping,
49866ce7ea8d41 David Howells 2024-12-16 774 struct writeback_control *wbc,
49866ce7ea8d41 David Howells 2024-12-16 775 struct iov_iter *iter)
49866ce7ea8d41 David Howells 2024-12-16 776 {
49866ce7ea8d41 David Howells 2024-12-16 777 struct netfs_io_request *wreq;
49866ce7ea8d41 David Howells 2024-12-16 778 struct netfs_inode *ictx = netfs_inode(mapping->host);
49866ce7ea8d41 David Howells 2024-12-16 779 int ret;
49866ce7ea8d41 David Howells 2024-12-16 780
49866ce7ea8d41 David Howells 2024-12-16 781 if (!mutex_trylock(&ictx->wb_lock)) {
49866ce7ea8d41 David Howells 2024-12-16 782 if (wbc->sync_mode == WB_SYNC_NONE) {
49866ce7ea8d41 David Howells 2024-12-16 783 netfs_stat(&netfs_n_wb_lock_skip);
49866ce7ea8d41 David Howells 2024-12-16 784 return 0;
49866ce7ea8d41 David Howells 2024-12-16 785 }
49866ce7ea8d41 David Howells 2024-12-16 786 netfs_stat(&netfs_n_wb_lock_wait);
49866ce7ea8d41 David Howells 2024-12-16 787 mutex_lock(&ictx->wb_lock);
49866ce7ea8d41 David Howells 2024-12-16 788 }
49866ce7ea8d41 David Howells 2024-12-16 789
49866ce7ea8d41 David Howells 2024-12-16 790 wreq = netfs_create_write_req(mapping, NULL, 0, NETFS_WRITEBACK_SINGLE);
49866ce7ea8d41 David Howells 2024-12-16 791 if (IS_ERR(wreq)) {
49866ce7ea8d41 David Howells 2024-12-16 792 ret = PTR_ERR(wreq);
49866ce7ea8d41 David Howells 2024-12-16 793 goto couldnt_start;
49866ce7ea8d41 David Howells 2024-12-16 794 }
4e45977f1ea9ab David Howells 2026-01-07 795 wreq->len = iov_iter_count(iter);
4e45977f1ea9ab David Howells 2026-01-07 796
a1cee75a302b7a David Howells 2026-02-28 797 ret = netfs_extract_iter(iter, wreq->len, INT_MAX, 0, &wreq->dispatch_cursor.bvecq, 0);
a1cee75a302b7a David Howells 2026-02-28 798 if (ret < 0)
a1cee75a302b7a David Howells 2026-02-28 799 goto cleanup_free;
a1cee75a302b7a David Howells 2026-02-28 800 if (ret < wreq->len) {
a1cee75a302b7a David Howells 2026-02-28 801 ret = -EIO;
a1cee75a302b7a David Howells 2026-02-28 802 goto cleanup_free;
a1cee75a302b7a David Howells 2026-02-28 803 }
a1cee75a302b7a David Howells 2026-02-28 804
a1cee75a302b7a David Howells 2026-02-28 805 bvecq_pos_attach(&wreq->collect_cursor, &wreq->dispatch_cursor);
a1cee75a302b7a David Howells 2026-02-28 806
2b1424cd131cfa David Howells 2025-05-19 807 __set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);
20d72b00ca814d David Howells 2025-05-19 808 trace_netfs_write(wreq, netfs_write_trace_writeback_single);
49866ce7ea8d41 David Howells 2024-12-16 809 netfs_stat(&netfs_n_wh_writepages);
49866ce7ea8d41 David Howells 2024-12-16 810
49866ce7ea8d41 David Howells 2024-12-16 811 if (__test_and_set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))
49866ce7ea8d41 David Howells 2024-12-16 812 wreq->netfs_ops->begin_writeback(wreq);
49866ce7ea8d41 David Howells 2024-12-16 813
4e45977f1ea9ab David Howells 2026-01-07 814 for (int s = 0; s < NR_IO_STREAMS; s++) {
4e45977f1ea9ab David Howells 2026-01-07 815 struct netfs_io_subrequest *subreq;
4e45977f1ea9ab David Howells 2026-01-07 816 struct netfs_io_stream *stream = &wreq->io_streams[s];
49866ce7ea8d41 David Howells 2024-12-16 817
4e45977f1ea9ab David Howells 2026-01-07 818 if (!stream->avail)
4e45977f1ea9ab David Howells 2026-01-07 819 continue;
49866ce7ea8d41 David Howells 2024-12-16 820
4e45977f1ea9ab David Howells 2026-01-07 821 netfs_prepare_write(wreq, stream, 0);
4e45977f1ea9ab David Howells 2026-01-07 822
4e45977f1ea9ab David Howells 2026-01-07 823 subreq = stream->construct;
4e45977f1ea9ab David Howells 2026-01-07 824 subreq->len = wreq->len;
4e45977f1ea9ab David Howells 2026-01-07 825 stream->submit_len = subreq->len;
4e45977f1ea9ab David Howells 2026-01-07 826
4e45977f1ea9ab David Howells 2026-01-07 827 netfs_issue_write(wreq, stream);
49866ce7ea8d41 David Howells 2024-12-16 828 }
49866ce7ea8d41 David Howells 2024-12-16 829
a1cee75a302b7a David Howells 2026-02-28 830 wreq->submitted = wreq->len;
49866ce7ea8d41 David Howells 2024-12-16 831 smp_wmb(); /* Write lists before ALL_QUEUED. */
49866ce7ea8d41 David Howells 2024-12-16 832 set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
49866ce7ea8d41 David Howells 2024-12-16 833
49866ce7ea8d41 David Howells 2024-12-16 834 mutex_unlock(&ictx->wb_lock);
2b1424cd131cfa David Howells 2025-05-19 835 netfs_wake_collector(wreq);
49866ce7ea8d41 David Howells 2024-12-16 836
4e45977f1ea9ab David Howells 2026-01-07 837 /* TODO: Might want to be async here if WB_SYNC_NONE, but then need to
4e45977f1ea9ab David Howells 2026-01-07 838 * wait before modifying.
4e45977f1ea9ab David Howells 2026-01-07 839 */
4e45977f1ea9ab David Howells 2026-01-07 840 ret = netfs_wait_for_write(wreq);
4e45977f1ea9ab David Howells 2026-01-07 841
a1cee75a302b7a David Howells 2026-02-28 842 cleanup_free:
20d72b00ca814d David Howells 2025-05-19 843 netfs_put_request(wreq, netfs_rreq_trace_put_return);
49866ce7ea8d41 David Howells 2024-12-16 844 _leave(" = %d", ret);
49866ce7ea8d41 David Howells 2024-12-16 845 return ret;
49866ce7ea8d41 David Howells 2024-12-16 846
49866ce7ea8d41 David Howells 2024-12-16 847 couldnt_start:
49866ce7ea8d41 David Howells 2024-12-16 848 mutex_unlock(&ictx->wb_lock);
49866ce7ea8d41 David Howells 2024-12-16 849 _leave(" = %d", ret);
49866ce7ea8d41 David Howells 2024-12-16 @850 return ret;
:::::: The code at line 850 was first introduced by commit
:::::: 49866ce7ea8d41a3dc198f519cc9caa2d6be1891 netfs: Add support for caching single monolithic objects such as AFS dirs
:::::: TO: David Howells <dhowells@redhat.com>
:::::: CC: Christian Brauner <brauner@kernel.org>
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* [dhowells-fs:netfs-next 10/16] fs/netfs/write_issue.c:850 netfs_writeback_single() warn: inconsistent returns '&ictx->wb_lock'.
From: Dan Carpenter @ 2026-03-02 9:04 UTC (permalink / raw)
To: oe-kbuild, David Howells; +Cc: lkp, oe-kbuild-all
tree: https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git netfs-next
head: 64a2722c2705a6c6f46b7214477a4a0a2106c1b6
commit: a1cee75a302b7af9ddd2449d38863fb935066e80 [10/16] netfs: Switch to using bvecq rather than folio_queue and rolling_buffer
config: x86_64-randconfig-r073-20260301 (https://download.01.org/0day-ci/archive/20260301/202603011243.LsWeX4QE-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
rustc: rustc 1.88.0 (6b00bc388 2025-06-23)
smatch version: v0.5.0-8994-gd50c5a4c
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
| Closes: https://lore.kernel.org/r/202603011243.LsWeX4QE-lkp@intel.com/
smatch warnings:
fs/netfs/write_issue.c:850 netfs_writeback_single() warn: inconsistent returns '&ictx->wb_lock'.
vim +850 fs/netfs/write_issue.c
49866ce7ea8d41 David Howells 2024-12-16 773 int netfs_writeback_single(struct address_space *mapping,
49866ce7ea8d41 David Howells 2024-12-16 774 struct writeback_control *wbc,
49866ce7ea8d41 David Howells 2024-12-16 775 struct iov_iter *iter)
49866ce7ea8d41 David Howells 2024-12-16 776 {
49866ce7ea8d41 David Howells 2024-12-16 777 struct netfs_io_request *wreq;
49866ce7ea8d41 David Howells 2024-12-16 778 struct netfs_inode *ictx = netfs_inode(mapping->host);
49866ce7ea8d41 David Howells 2024-12-16 779 int ret;
49866ce7ea8d41 David Howells 2024-12-16 780
49866ce7ea8d41 David Howells 2024-12-16 781 if (!mutex_trylock(&ictx->wb_lock)) {
49866ce7ea8d41 David Howells 2024-12-16 782 if (wbc->sync_mode == WB_SYNC_NONE) {
49866ce7ea8d41 David Howells 2024-12-16 783 netfs_stat(&netfs_n_wb_lock_skip);
49866ce7ea8d41 David Howells 2024-12-16 784 return 0;
Unlock before returning?
49866ce7ea8d41 David Howells 2024-12-16 785 }
49866ce7ea8d41 David Howells 2024-12-16 786 netfs_stat(&netfs_n_wb_lock_wait);
49866ce7ea8d41 David Howells 2024-12-16 787 mutex_lock(&ictx->wb_lock);
49866ce7ea8d41 David Howells 2024-12-16 788 }
49866ce7ea8d41 David Howells 2024-12-16 789
49866ce7ea8d41 David Howells 2024-12-16 790 wreq = netfs_create_write_req(mapping, NULL, 0, NETFS_WRITEBACK_SINGLE);
49866ce7ea8d41 David Howells 2024-12-16 791 if (IS_ERR(wreq)) {
49866ce7ea8d41 David Howells 2024-12-16 792 ret = PTR_ERR(wreq);
49866ce7ea8d41 David Howells 2024-12-16 793 goto couldnt_start;
49866ce7ea8d41 David Howells 2024-12-16 794 }
4e45977f1ea9ab David Howells 2026-01-07 795 wreq->len = iov_iter_count(iter);
4e45977f1ea9ab David Howells 2026-01-07 796
a1cee75a302b7a David Howells 2026-02-28 797 ret = netfs_extract_iter(iter, wreq->len, INT_MAX, 0, &wreq->dispatch_cursor.bvecq, 0);
a1cee75a302b7a David Howells 2026-02-28 798 if (ret < 0)
a1cee75a302b7a David Howells 2026-02-28 799 goto cleanup_free;
a1cee75a302b7a David Howells 2026-02-28 800 if (ret < wreq->len) {
a1cee75a302b7a David Howells 2026-02-28 801 ret = -EIO;
a1cee75a302b7a David Howells 2026-02-28 802 goto cleanup_free;
a1cee75a302b7a David Howells 2026-02-28 803 }
a1cee75a302b7a David Howells 2026-02-28 804
a1cee75a302b7a David Howells 2026-02-28 805 bvecq_pos_attach(&wreq->collect_cursor, &wreq->dispatch_cursor);
a1cee75a302b7a David Howells 2026-02-28 806
2b1424cd131cfa David Howells 2025-05-19 807 __set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);
20d72b00ca814d David Howells 2025-05-19 808 trace_netfs_write(wreq, netfs_write_trace_writeback_single);
49866ce7ea8d41 David Howells 2024-12-16 809 netfs_stat(&netfs_n_wh_writepages);
49866ce7ea8d41 David Howells 2024-12-16 810
49866ce7ea8d41 David Howells 2024-12-16 811 if (__test_and_set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))
49866ce7ea8d41 David Howells 2024-12-16 812 wreq->netfs_ops->begin_writeback(wreq);
49866ce7ea8d41 David Howells 2024-12-16 813
4e45977f1ea9ab David Howells 2026-01-07 814 for (int s = 0; s < NR_IO_STREAMS; s++) {
4e45977f1ea9ab David Howells 2026-01-07 815 struct netfs_io_subrequest *subreq;
4e45977f1ea9ab David Howells 2026-01-07 816 struct netfs_io_stream *stream = &wreq->io_streams[s];
49866ce7ea8d41 David Howells 2024-12-16 817
4e45977f1ea9ab David Howells 2026-01-07 818 if (!stream->avail)
4e45977f1ea9ab David Howells 2026-01-07 819 continue;
49866ce7ea8d41 David Howells 2024-12-16 820
4e45977f1ea9ab David Howells 2026-01-07 821 netfs_prepare_write(wreq, stream, 0);
4e45977f1ea9ab David Howells 2026-01-07 822
4e45977f1ea9ab David Howells 2026-01-07 823 subreq = stream->construct;
4e45977f1ea9ab David Howells 2026-01-07 824 subreq->len = wreq->len;
4e45977f1ea9ab David Howells 2026-01-07 825 stream->submit_len = subreq->len;
4e45977f1ea9ab David Howells 2026-01-07 826
4e45977f1ea9ab David Howells 2026-01-07 827 netfs_issue_write(wreq, stream);
49866ce7ea8d41 David Howells 2024-12-16 828 }
49866ce7ea8d41 David Howells 2024-12-16 829
a1cee75a302b7a David Howells 2026-02-28 830 wreq->submitted = wreq->len;
49866ce7ea8d41 David Howells 2024-12-16 831 smp_wmb(); /* Write lists before ALL_QUEUED. */
49866ce7ea8d41 David Howells 2024-12-16 832 set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
49866ce7ea8d41 David Howells 2024-12-16 833
49866ce7ea8d41 David Howells 2024-12-16 834 mutex_unlock(&ictx->wb_lock);
2b1424cd131cfa David Howells 2025-05-19 835 netfs_wake_collector(wreq);
49866ce7ea8d41 David Howells 2024-12-16 836
4e45977f1ea9ab David Howells 2026-01-07 837 /* TODO: Might want to be async here if WB_SYNC_NONE, but then need to
4e45977f1ea9ab David Howells 2026-01-07 838 * wait before modifying.
4e45977f1ea9ab David Howells 2026-01-07 839 */
4e45977f1ea9ab David Howells 2026-01-07 840 ret = netfs_wait_for_write(wreq);
4e45977f1ea9ab David Howells 2026-01-07 841
a1cee75a302b7a David Howells 2026-02-28 842 cleanup_free:
20d72b00ca814d David Howells 2025-05-19 843 netfs_put_request(wreq, netfs_rreq_trace_put_return);
49866ce7ea8d41 David Howells 2024-12-16 844 _leave(" = %d", ret);
49866ce7ea8d41 David Howells 2024-12-16 845 return ret;
49866ce7ea8d41 David Howells 2024-12-16 846
49866ce7ea8d41 David Howells 2024-12-16 847 couldnt_start:
49866ce7ea8d41 David Howells 2024-12-16 848 mutex_unlock(&ictx->wb_lock);
49866ce7ea8d41 David Howells 2024-12-16 849 _leave(" = %d", ret);
49866ce7ea8d41 David Howells 2024-12-16 @850 return ret;
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [dhowells-fs:netfs-next 10/16] fs/netfs/write_issue.c:850 netfs_writeback_single() warn: inconsistent returns '&ictx->wb_lock'.
From: David Howells @ 2026-03-02 11:17 UTC (permalink / raw)
To: Dan Carpenter; +Cc: dhowells, oe-kbuild, lkp, oe-kbuild-all
Dan Carpenter <dan.carpenter@linaro.org> wrote:
> if (!mutex_trylock(&ictx->wb_lock)) {
> if (wbc->sync_mode == WB_SYNC_NONE) {
> netfs_stat(&netfs_n_wb_lock_skip);
> return 0;
>
> Unlock before returning?
If mutex_trylock() returns false, we don't have the lock. Note that it's a
"try" so it's not guaranteed to work.
David
* Re: [dhowells-fs:netfs-next 10/16] fs/netfs/write_issue.c:850 netfs_writeback_single() warn: inconsistent returns '&ictx->wb_lock'.
From: Dan Carpenter @ 2026-03-02 12:01 UTC (permalink / raw)
To: David Howells; +Cc: oe-kbuild, lkp, oe-kbuild-all
On Mon, Mar 02, 2026 at 11:17:19AM +0000, David Howells wrote:
> Dan Carpenter <dan.carpenter@linaro.org> wrote:
>
> > if (!mutex_trylock(&ictx->wb_lock)) {
> > if (wbc->sync_mode == WB_SYNC_NONE) {
> > netfs_stat(&netfs_n_wb_lock_skip);
> > return 0;
> >
> > Unlock before returning?
>
> If mutex_trylock() returns false, we don't have the lock. Note that it's a
> "try" so it's not guaranteed to work.
>
Oops.  Sorry, that's my bad.  It's the cleanup_free label which causes
the problem.
49866ce7ea8d41 David Howells 2024-12-16 773 int netfs_writeback_single(struct address_space *mapping,
49866ce7ea8d41 David Howells 2024-12-16 774 struct writeback_control *wbc,
49866ce7ea8d41 David Howells 2024-12-16 775 struct iov_iter *iter)
49866ce7ea8d41 David Howells 2024-12-16 776 {
49866ce7ea8d41 David Howells 2024-12-16 777 struct netfs_io_request *wreq;
49866ce7ea8d41 David Howells 2024-12-16 778 struct netfs_inode *ictx = netfs_inode(mapping->host);
49866ce7ea8d41 David Howells 2024-12-16 779 int ret;
49866ce7ea8d41 David Howells 2024-12-16 780
49866ce7ea8d41 David Howells 2024-12-16 781 if (!mutex_trylock(&ictx->wb_lock)) {
49866ce7ea8d41 David Howells 2024-12-16 782 if (wbc->sync_mode == WB_SYNC_NONE) {
49866ce7ea8d41 David Howells 2024-12-16 783 netfs_stat(&netfs_n_wb_lock_skip);
49866ce7ea8d41 David Howells 2024-12-16 784 return 0;
49866ce7ea8d41 David Howells 2024-12-16 785 }
49866ce7ea8d41 David Howells 2024-12-16 786 netfs_stat(&netfs_n_wb_lock_wait);
49866ce7ea8d41 David Howells 2024-12-16 787 mutex_lock(&ictx->wb_lock);
49866ce7ea8d41 David Howells 2024-12-16 788 }
49866ce7ea8d41 David Howells 2024-12-16 789
49866ce7ea8d41 David Howells 2024-12-16 790 wreq = netfs_create_write_req(mapping, NULL, 0, NETFS_WRITEBACK_SINGLE);
49866ce7ea8d41 David Howells 2024-12-16 791 if (IS_ERR(wreq)) {
49866ce7ea8d41 David Howells 2024-12-16 792 ret = PTR_ERR(wreq);
49866ce7ea8d41 David Howells 2024-12-16 793 goto couldnt_start;
49866ce7ea8d41 David Howells 2024-12-16 794 }
4e45977f1ea9ab David Howells 2026-01-07 795 wreq->len = iov_iter_count(iter);
4e45977f1ea9ab David Howells 2026-01-07 796
a1cee75a302b7a David Howells 2026-02-28 797 ret = netfs_extract_iter(iter, wreq->len, INT_MAX, 0, &wreq->dispatch_cursor.bvecq, 0);
a1cee75a302b7a David Howells 2026-02-28 798 if (ret < 0)
a1cee75a302b7a David Howells 2026-02-28 799 goto cleanup_free;
Holding the lock.
a1cee75a302b7a David Howells 2026-02-28 800 if (ret < wreq->len) {
a1cee75a302b7a David Howells 2026-02-28 801 ret = -EIO;
a1cee75a302b7a David Howells 2026-02-28 802 goto cleanup_free;
And here.
a1cee75a302b7a David Howells 2026-02-28 803 }
a1cee75a302b7a David Howells 2026-02-28 804
a1cee75a302b7a David Howells 2026-02-28 805 bvecq_pos_attach(&wreq->collect_cursor, &wreq->dispatch_cursor);
a1cee75a302b7a David Howells 2026-02-28 806
2b1424cd131cfa David Howells 2025-05-19 807 __set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);
20d72b00ca814d David Howells 2025-05-19 808 trace_netfs_write(wreq, netfs_write_trace_writeback_single);
49866ce7ea8d41 David Howells 2024-12-16 809 netfs_stat(&netfs_n_wh_writepages);
49866ce7ea8d41 David Howells 2024-12-16 810
49866ce7ea8d41 David Howells 2024-12-16 811 if (__test_and_set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))
49866ce7ea8d41 David Howells 2024-12-16 812 wreq->netfs_ops->begin_writeback(wreq);
49866ce7ea8d41 David Howells 2024-12-16 813
4e45977f1ea9ab David Howells 2026-01-07 814 for (int s = 0; s < NR_IO_STREAMS; s++) {
4e45977f1ea9ab David Howells 2026-01-07 815 struct netfs_io_subrequest *subreq;
4e45977f1ea9ab David Howells 2026-01-07 816 struct netfs_io_stream *stream = &wreq->io_streams[s];
49866ce7ea8d41 David Howells 2024-12-16 817
4e45977f1ea9ab David Howells 2026-01-07 818 if (!stream->avail)
4e45977f1ea9ab David Howells 2026-01-07 819 continue;
49866ce7ea8d41 David Howells 2024-12-16 820
4e45977f1ea9ab David Howells 2026-01-07 821 netfs_prepare_write(wreq, stream, 0);
4e45977f1ea9ab David Howells 2026-01-07 822
4e45977f1ea9ab David Howells 2026-01-07 823 subreq = stream->construct;
4e45977f1ea9ab David Howells 2026-01-07 824 subreq->len = wreq->len;
4e45977f1ea9ab David Howells 2026-01-07 825 stream->submit_len = subreq->len;
4e45977f1ea9ab David Howells 2026-01-07 826
4e45977f1ea9ab David Howells 2026-01-07 827 netfs_issue_write(wreq, stream);
49866ce7ea8d41 David Howells 2024-12-16 828 }
49866ce7ea8d41 David Howells 2024-12-16 829
a1cee75a302b7a David Howells 2026-02-28 830 wreq->submitted = wreq->len;
49866ce7ea8d41 David Howells 2024-12-16 831 smp_wmb(); /* Write lists before ALL_QUEUED. */
49866ce7ea8d41 David Howells 2024-12-16 832 set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
49866ce7ea8d41 David Howells 2024-12-16 833
49866ce7ea8d41 David Howells 2024-12-16 834 mutex_unlock(&ictx->wb_lock);
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Drops the lock here.
2b1424cd131cfa David Howells 2025-05-19 835 netfs_wake_collector(wreq);
49866ce7ea8d41 David Howells 2024-12-16 836
4e45977f1ea9ab David Howells 2026-01-07 837 /* TODO: Might want to be async here if WB_SYNC_NONE, but then need to
4e45977f1ea9ab David Howells 2026-01-07 838 * wait before modifying.
4e45977f1ea9ab David Howells 2026-01-07 839 */
4e45977f1ea9ab David Howells 2026-01-07 840 ret = netfs_wait_for_write(wreq);
4e45977f1ea9ab David Howells 2026-01-07 841
a1cee75a302b7a David Howells 2026-02-28 842 cleanup_free:
But the goto cleanup_free gotos are still holding the lock.
regards,
dan carpenter
20d72b00ca814d David Howells 2025-05-19 843 netfs_put_request(wreq, netfs_rreq_trace_put_return);
49866ce7ea8d41 David Howells 2024-12-16 844 _leave(" = %d", ret);
49866ce7ea8d41 David Howells 2024-12-16 845 return ret;
49866ce7ea8d41 David Howells 2024-12-16 846
49866ce7ea8d41 David Howells 2024-12-16 847 couldnt_start:
49866ce7ea8d41 David Howells 2024-12-16 848 mutex_unlock(&ictx->wb_lock);
49866ce7ea8d41 David Howells 2024-12-16 849 _leave(" = %d", ret);
49866ce7ea8d41 David Howells 2024-12-16 @850 return ret;
* Re: [dhowells-fs:netfs-next 10/16] fs/netfs/write_issue.c:850 netfs_writeback_single() warn: inconsistent returns '&ictx->wb_lock'.
From: David Howells @ 2026-03-02 14:47 UTC (permalink / raw)
To: Dan Carpenter; +Cc: dhowells, oe-kbuild, lkp, oe-kbuild-all
Dan Carpenter <dan.carpenter@linaro.org> wrote:
> a1cee75a302b7a David Howells 2026-02-28 842 cleanup_free:
>
> But the goto cleanup_free gotos are still holding the lock.
I see it now, thanks. I'm going to move cleanup_free: and change the end of
the function:
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -839,11 +839,12 @@ int netfs_writeback_single(struct address_space *mapping,
*/
ret = netfs_wait_for_write(wreq);
-cleanup_free:
netfs_put_request(wreq, netfs_rreq_trace_put_return);
_leave(" = %d", ret);
return ret;
+cleanup_free:
+ netfs_put_request(wreq, netfs_rreq_trace_put_return);
couldnt_start:
mutex_unlock(&ictx->wb_lock);
_leave(" = %d", ret);
David
Thread overview: 5+ messages
2026-03-01 4:48 [dhowells-fs:netfs-next 10/16] fs/netfs/write_issue.c:850 netfs_writeback_single() warn: inconsistent returns '&ictx->wb_lock' kernel test robot
2026-03-02 9:04 ` Dan Carpenter
2026-03-02 11:17 ` David Howells
2026-03-02 12:01 ` Dan Carpenter
2026-03-02 14:47 ` David Howells