From: "Michael S. Tsirkin" <mst@redhat.com>
To: Li Chen <me@linux.beauty>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
Dan Williams <dan.j.williams@intel.com>,
Vishal Verma <vishal.l.verma@intel.com>,
Dave Jiang <dave.jiang@intel.com>,
Ira Weiny <ira.weiny@intel.com>,
Cornelia Huck <cohuck@redhat.com>,
Yuval Shaia <yuval.shaia@oracle.com>,
virtualization@lists.linux.dev, nvdimm@lists.linux.dev,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] nvdimm: virtio_pmem: serialize flush requests
Date: Tue, 3 Feb 2026 05:27:12 -0500 [thread overview]
Message-ID: <20260203052616-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <20260203021353.121091-1-me@linux.beauty>
On Tue, Feb 03, 2026 at 10:13:51AM +0800, Li Chen wrote:
> Under heavy concurrent flush traffic, virtio-pmem can overflow its request
> virtqueue (req_vq): virtqueue_add_sgs() starts returning -ENOSPC and the
> driver logs "no free slots in the virtqueue". Shortly after that the
> device enters VIRTIO_CONFIG_S_NEEDS_RESET and flush requests fail with
> "virtio pmem device needs a reset".
>
> Serialize virtio_pmem_flush() with a per-device mutex so only one flush
> request is in-flight at a time. This prevents req_vq descriptor overflow
> under high concurrency.
>
> Reproducer (guest with virtio-pmem):
> - mkfs.ext4 -F /dev/pmem0
> - mount -t ext4 -o dax,noatime /dev/pmem0 /mnt/bench
> - fio: ioengine=io_uring rw=randwrite bs=4k iodepth=64 numjobs=64
> direct=1 fsync=1 runtime=30s time_based=1
> - dmesg: "no free slots in the virtqueue"
> "virtio pmem device needs a reset"
>
> Fixes: 6e84200c0a29 ("virtio-pmem: Add virtio pmem driver")
> Signed-off-by: Li Chen <me@linux.beauty>
Thanks!
And the commit message looks good now and includes the
reproducer.
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Ira, are you picking this up?
> ---
> v2:
> - Use guard(mutex)() for flush_lock (as suggested by Ira Weiny).
> - Drop redundant might_sleep() next to guard(mutex)() (as suggested by Michael S. Tsirkin).
>
> drivers/nvdimm/nd_virtio.c | 3 ++-
> drivers/nvdimm/virtio_pmem.c | 1 +
> drivers/nvdimm/virtio_pmem.h | 4 ++++
> 3 files changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/nvdimm/nd_virtio.c b/drivers/nvdimm/nd_virtio.c
> index c3f07be4aa22..af82385be7c6 100644
> --- a/drivers/nvdimm/nd_virtio.c
> +++ b/drivers/nvdimm/nd_virtio.c
> @@ -44,6 +44,8 @@ static int virtio_pmem_flush(struct nd_region *nd_region)
> unsigned long flags;
> int err, err1;
>
> + guard(mutex)(&vpmem->flush_lock);
> +
> /*
> * Don't bother to submit the request to the device if the device is
> * not activated.
> @@ -53,7 +55,6 @@ static int virtio_pmem_flush(struct nd_region *nd_region)
> return -EIO;
> }
>
> - might_sleep();
> req_data = kmalloc(sizeof(*req_data), GFP_KERNEL);
> if (!req_data)
> return -ENOMEM;
> diff --git a/drivers/nvdimm/virtio_pmem.c b/drivers/nvdimm/virtio_pmem.c
> index 2396d19ce549..77b196661905 100644
> --- a/drivers/nvdimm/virtio_pmem.c
> +++ b/drivers/nvdimm/virtio_pmem.c
> @@ -64,6 +64,7 @@ static int virtio_pmem_probe(struct virtio_device *vdev)
> goto out_err;
> }
>
> + mutex_init(&vpmem->flush_lock);
> vpmem->vdev = vdev;
> vdev->priv = vpmem;
> err = init_vq(vpmem);
> diff --git a/drivers/nvdimm/virtio_pmem.h b/drivers/nvdimm/virtio_pmem.h
> index 0dddefe594c4..f72cf17f9518 100644
> --- a/drivers/nvdimm/virtio_pmem.h
> +++ b/drivers/nvdimm/virtio_pmem.h
> @@ -13,6 +13,7 @@
> #include <linux/module.h>
> #include <uapi/linux/virtio_pmem.h>
> #include <linux/libnvdimm.h>
> +#include <linux/mutex.h>
> #include <linux/spinlock.h>
>
> struct virtio_pmem_request {
> @@ -35,6 +36,9 @@ struct virtio_pmem {
> /* Virtio pmem request queue */
> struct virtqueue *req_vq;
>
> + /* Serialize flush requests to the device. */
> + struct mutex flush_lock;
> +
> /* nvdimm bus registers virtio pmem device */
> struct nvdimm_bus *nvdimm_bus;
> struct nvdimm_bus_descriptor nd_desc;
> --
> 2.52.0
Thread overview: 4+ messages
2026-02-03 2:13 [PATCH v2] nvdimm: virtio_pmem: serialize flush requests Li Chen
2026-02-03 10:27 ` Michael S. Tsirkin [this message]
2026-02-03 20:41 ` Ira Weiny
2026-02-03 10:43 ` Pankaj Gupta