From: Dan Carpenter <dan.carpenter@linaro.org>
To: David Howells <dhowells@redhat.com>
Cc: oe-kbuild@lists.linux.dev, lkp@intel.com, oe-kbuild-all@lists.linux.dev
Subject: Re: [dhowells-fs:netfs-next 10/16] fs/netfs/write_issue.c:850 netfs_writeback_single() warn: inconsistent returns '&ictx->wb_lock'.
Date: Mon, 2 Mar 2026 15:01:19 +0300 [thread overview]
Message-ID: <aaV8D4RTkIrcn6jd@stanley.mountain> (raw)
In-Reply-To: <1276684.1772450239@warthog.procyon.org.uk>
On Mon, Mar 02, 2026 at 11:17:19AM +0000, David Howells wrote:
> Dan Carpenter <dan.carpenter@linaro.org> wrote:
>
> > if (!mutex_trylock(&ictx->wb_lock)) {
> > if (wbc->sync_mode == WB_SYNC_NONE) {
> > netfs_stat(&netfs_n_wb_lock_skip);
> > return 0;
> >
> > Unlock before returning?
>
> If mutex_trylock() returns false, we don't have the lock. Note that it's a
> "try" so it's not guaranteed to work.
>
Oops.  Sorry, that's my bad.  It's the cleanup_free label that causes
the problem.
49866ce7ea8d41 David Howells 2024-12-16 773 int netfs_writeback_single(struct address_space *mapping,
49866ce7ea8d41 David Howells 2024-12-16 774 struct writeback_control *wbc,
49866ce7ea8d41 David Howells 2024-12-16 775 struct iov_iter *iter)
49866ce7ea8d41 David Howells 2024-12-16 776 {
49866ce7ea8d41 David Howells 2024-12-16 777 struct netfs_io_request *wreq;
49866ce7ea8d41 David Howells 2024-12-16 778 struct netfs_inode *ictx = netfs_inode(mapping->host);
49866ce7ea8d41 David Howells 2024-12-16 779 int ret;
49866ce7ea8d41 David Howells 2024-12-16 780
49866ce7ea8d41 David Howells 2024-12-16 781 if (!mutex_trylock(&ictx->wb_lock)) {
49866ce7ea8d41 David Howells 2024-12-16 782 if (wbc->sync_mode == WB_SYNC_NONE) {
49866ce7ea8d41 David Howells 2024-12-16 783 netfs_stat(&netfs_n_wb_lock_skip);
49866ce7ea8d41 David Howells 2024-12-16 784 return 0;
49866ce7ea8d41 David Howells 2024-12-16 785 }
49866ce7ea8d41 David Howells 2024-12-16 786 netfs_stat(&netfs_n_wb_lock_wait);
49866ce7ea8d41 David Howells 2024-12-16 787 mutex_lock(&ictx->wb_lock);
49866ce7ea8d41 David Howells 2024-12-16 788 }
49866ce7ea8d41 David Howells 2024-12-16 789
49866ce7ea8d41 David Howells 2024-12-16 790 wreq = netfs_create_write_req(mapping, NULL, 0, NETFS_WRITEBACK_SINGLE);
49866ce7ea8d41 David Howells 2024-12-16 791 if (IS_ERR(wreq)) {
49866ce7ea8d41 David Howells 2024-12-16 792 ret = PTR_ERR(wreq);
49866ce7ea8d41 David Howells 2024-12-16 793 goto couldnt_start;
49866ce7ea8d41 David Howells 2024-12-16 794 }
4e45977f1ea9ab David Howells 2026-01-07 795 wreq->len = iov_iter_count(iter);
4e45977f1ea9ab David Howells 2026-01-07 796
a1cee75a302b7a David Howells 2026-02-28 797 ret = netfs_extract_iter(iter, wreq->len, INT_MAX, 0, &wreq->dispatch_cursor.bvecq, 0);
a1cee75a302b7a David Howells 2026-02-28 798 if (ret < 0)
a1cee75a302b7a David Howells 2026-02-28 799 goto cleanup_free;
Holding the lock.
a1cee75a302b7a David Howells 2026-02-28 800 if (ret < wreq->len) {
a1cee75a302b7a David Howells 2026-02-28 801 ret = -EIO;
a1cee75a302b7a David Howells 2026-02-28 802 goto cleanup_free;
And here.
a1cee75a302b7a David Howells 2026-02-28 803 }
a1cee75a302b7a David Howells 2026-02-28 804
a1cee75a302b7a David Howells 2026-02-28 805 bvecq_pos_attach(&wreq->collect_cursor, &wreq->dispatch_cursor);
a1cee75a302b7a David Howells 2026-02-28 806
2b1424cd131cfa David Howells 2025-05-19 807 __set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);
20d72b00ca814d David Howells 2025-05-19 808 trace_netfs_write(wreq, netfs_write_trace_writeback_single);
49866ce7ea8d41 David Howells 2024-12-16 809 netfs_stat(&netfs_n_wh_writepages);
49866ce7ea8d41 David Howells 2024-12-16 810
49866ce7ea8d41 David Howells 2024-12-16 811 if (__test_and_set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))
49866ce7ea8d41 David Howells 2024-12-16 812 wreq->netfs_ops->begin_writeback(wreq);
49866ce7ea8d41 David Howells 2024-12-16 813
4e45977f1ea9ab David Howells 2026-01-07 814 for (int s = 0; s < NR_IO_STREAMS; s++) {
4e45977f1ea9ab David Howells 2026-01-07 815 struct netfs_io_subrequest *subreq;
4e45977f1ea9ab David Howells 2026-01-07 816 struct netfs_io_stream *stream = &wreq->io_streams[s];
49866ce7ea8d41 David Howells 2024-12-16 817
4e45977f1ea9ab David Howells 2026-01-07 818 if (!stream->avail)
4e45977f1ea9ab David Howells 2026-01-07 819 continue;
49866ce7ea8d41 David Howells 2024-12-16 820
4e45977f1ea9ab David Howells 2026-01-07 821 netfs_prepare_write(wreq, stream, 0);
4e45977f1ea9ab David Howells 2026-01-07 822
4e45977f1ea9ab David Howells 2026-01-07 823 subreq = stream->construct;
4e45977f1ea9ab David Howells 2026-01-07 824 subreq->len = wreq->len;
4e45977f1ea9ab David Howells 2026-01-07 825 stream->submit_len = subreq->len;
4e45977f1ea9ab David Howells 2026-01-07 826
4e45977f1ea9ab David Howells 2026-01-07 827 netfs_issue_write(wreq, stream);
49866ce7ea8d41 David Howells 2024-12-16 828 }
49866ce7ea8d41 David Howells 2024-12-16 829
a1cee75a302b7a David Howells 2026-02-28 830 wreq->submitted = wreq->len;
49866ce7ea8d41 David Howells 2024-12-16 831 smp_wmb(); /* Write lists before ALL_QUEUED. */
49866ce7ea8d41 David Howells 2024-12-16 832 set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
49866ce7ea8d41 David Howells 2024-12-16 833
49866ce7ea8d41 David Howells 2024-12-16 834 mutex_unlock(&ictx->wb_lock);
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Drops the lock here.
2b1424cd131cfa David Howells 2025-05-19 835 netfs_wake_collector(wreq);
49866ce7ea8d41 David Howells 2024-12-16 836
4e45977f1ea9ab David Howells 2026-01-07 837 /* TODO: Might want to be async here if WB_SYNC_NONE, but then need to
4e45977f1ea9ab David Howells 2026-01-07 838 * wait before modifying.
4e45977f1ea9ab David Howells 2026-01-07 839 */
4e45977f1ea9ab David Howells 2026-01-07 840 ret = netfs_wait_for_write(wreq);
4e45977f1ea9ab David Howells 2026-01-07 841
a1cee75a302b7a David Howells 2026-02-28 842 cleanup_free:
But the goto cleanup_free paths jump here while still holding the lock.
regards,
dan carpenter
20d72b00ca814d David Howells 2025-05-19 843 netfs_put_request(wreq, netfs_rreq_trace_put_return);
49866ce7ea8d41 David Howells 2024-12-16 844 _leave(" = %d", ret);
49866ce7ea8d41 David Howells 2024-12-16 845 return ret;
49866ce7ea8d41 David Howells 2024-12-16 846
49866ce7ea8d41 David Howells 2024-12-16 847 couldnt_start:
49866ce7ea8d41 David Howells 2024-12-16 848 mutex_unlock(&ictx->wb_lock);
49866ce7ea8d41 David Howells 2024-12-16 849 _leave(" = %d", ret);
49866ce7ea8d41 David Howells 2024-12-16 @850 return ret;