From: Greg Kurz <groug@kaod.org>
To: Christian Schoenebeck <qemu_oss@crudebyte.com>
Cc: <qemu-devel@nongnu.org>, <qemu-stable@nongnu.org>
Subject: Re: [PATCH 1/2] 9pfs: fix concurrent v9fs_reclaim_fd() calls
Date: Thu, 6 Mar 2025 08:38:58 +0100
Message-ID: <20250306083858.743ed47c@bahia>
In-Reply-To: <3429da65ff753b47654b7ae26607417c571a7cb1.1741101468.git.qemu_oss@crudebyte.com>
Hi Christian!
On Tue, 4 Mar 2025 16:15:57 +0100
Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> Even though this function is serialized to always be called from the main
> thread, v9fs_reclaim_fd() dispatches the coroutine to a worker thread in
> between via its v9fs_co_*() calls, hence leading to the situation where
> v9fs_reclaim_fd() is effectively executed multiple times simultaneously,
> which renders its LRU algorithm useless and causes high latency.
>
> Fix this by adding a simple boolean variable to ensure this function is
> only called once at a time. No synchronization is needed for this boolean
> variable, as the function is only entered and left on the main thread.
>
> Fixes: 7a46274529c ('hw/9pfs: Add file descriptor reclaim support')
> Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> ---
Another long-standing bug bites the dust! Good catch!
Reviewed-by: Greg Kurz <groug@kaod.org>
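For the record, since the trick is subtle: the plain bool works without any
locking only because v9fs_reclaim_fd() is entered and left on the main
thread, and a coroutine can only be suspended at its yield points (the
v9fs_co_*() calls). Here is a minimal, self-contained sketch of that guard
pattern, with hypothetical names (reclaim, co_close_stub) rather than the
actual QEMU code, just to illustrate why no atomics are required:

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Set while one invocation is in flight. It is only read and written
 * between yield points on the main thread, hence no atomics needed. */
static bool reclaiming;

/* Hypothetical stand-in for a v9fs_co_*() call: in QEMU this would
 * suspend the coroutine, run the blocking close() on a worker thread
 * and resume on the main thread, so the main loop may re-enter
 * reclaim() from another coroutine in the meantime. */
static void co_close_stub(int fd)
{
    close(fd);
}

static void reclaim(int fd)
{
    if (reclaiming) {
        return;          /* another invocation already walks the list */
    }
    reclaiming = true;

    co_close_stub(fd);   /* yield point: re-entrancy would happen here */

    reclaiming = false;  /* cleared on the same (main) thread */
}

int main(void)
{
    reclaim(dup(STDOUT_FILENO));   /* exercise the guard once */
    printf("reclaiming=%d\n", reclaiming);
    return 0;
}

The only thing to watch out for with this pattern is that every return taken
after the flag is set must also clear it; as far as I can see, the patched
function has a single exit path after that point, so we are fine.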
>  hw/9pfs/9p.c | 10 ++++++++++
>  hw/9pfs/9p.h |  1 +
>  2 files changed, 11 insertions(+)
>
> diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
> index 7cad2bce62..4f9c2dde9c 100644
> --- a/hw/9pfs/9p.c
> +++ b/hw/9pfs/9p.c
> @@ -435,6 +435,12 @@ void coroutine_fn v9fs_reclaim_fd(V9fsPDU *pdu)
>      GHashTableIter iter;
>      gpointer fid;
>
> +    /* prevent multiple coroutines from running this function simultaneously */
> +    if (s->reclaiming) {
> +        return;
> +    }
> +    s->reclaiming = true;
> +
>      g_hash_table_iter_init(&iter, s->fids);
>
>      QSLIST_HEAD(, V9fsFidState) reclaim_list =
> @@ -510,6 +516,8 @@ void coroutine_fn v9fs_reclaim_fd(V9fsPDU *pdu)
>           */
>          put_fid(pdu, f);
>      }
> +
> +    s->reclaiming = false;
>  }
>
>  /*
> @@ -4324,6 +4332,8 @@ int v9fs_device_realize_common(V9fsState *s, const V9fsTransport *t,
>      s->ctx.fst = &fse->fst;
>      fsdev_throttle_init(s->ctx.fst);
>
> +    s->reclaiming = false;
> +
>      rc = 0;
> out:
>      if (rc) {
> diff --git a/hw/9pfs/9p.h b/hw/9pfs/9p.h
> index 5e041e1f60..259ad32ed1 100644
> --- a/hw/9pfs/9p.h
> +++ b/hw/9pfs/9p.h
> @@ -362,6 +362,7 @@ struct V9fsState {
>      uint64_t qp_ndevices; /* Amount of entries in qpd_table. */
>      uint16_t qp_affix_next;
>      uint64_t qp_fullpath_next;
> +    bool reclaiming;
>  };
>
>  /* 9p2000.L open flags */
--
Greg