From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
To: patches@lists.linux.dev
Cc: Li Chen, Pankaj Gupta, "Michael S. Tsirkin", Ira Weiny, Sasha Levin
Subject: [PATCH 6.6 207/283] nvdimm: virtio_pmem: serialize flush requests
Date: Sat, 28 Feb 2026 13:05:49 -0500
Message-ID: <20260228180709.1583486-207-sashal@kernel.org>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20260228180709.1583486-1-sashal@kernel.org>
References: <20260228180709.1583486-1-sashal@kernel.org>
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Li Chen

[ Upstream commit a9ba6733c7f1096c4506bf4e34a546e07242df74 ]

Under heavy concurrent flush traffic, virtio-pmem can overflow its
request virtqueue (req_vq): virtqueue_add_sgs() starts returning
-ENOSPC and the driver logs "no free slots in the virtqueue". Shortly
after that the device enters VIRTIO_CONFIG_S_NEEDS_RESET and flush
requests fail with "virtio pmem device needs a reset".

Serialize virtio_pmem_flush() with a per-device mutex so only one
flush request is in flight at a time. This prevents req_vq descriptor
overflow under high concurrency.

Reproducer (guest with virtio-pmem):
 - mkfs.ext4 -F /dev/pmem0
 - mount -t ext4 -o dax,noatime /dev/pmem0 /mnt/bench
 - fio: ioengine=io_uring rw=randwrite bs=4k iodepth=64 numjobs=64
   direct=1 fsync=1 runtime=30s time_based=1
 - dmesg: "no free slots in the virtqueue"
          "virtio pmem device needs a reset"

Fixes: 6e84200c0a29 ("virtio-pmem: Add virtio pmem driver")
Signed-off-by: Li Chen
Acked-by: Pankaj Gupta
Acked-by: Michael S. Tsirkin
Link: https://patch.msgid.link/20260203021353.121091-1-me@linux.beauty
Signed-off-by: Ira Weiny
Signed-off-by: Sasha Levin
---
 drivers/nvdimm/nd_virtio.c   | 3 ++-
 drivers/nvdimm/virtio_pmem.c | 1 +
 drivers/nvdimm/virtio_pmem.h | 4 ++++
 3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/nvdimm/nd_virtio.c b/drivers/nvdimm/nd_virtio.c
index 839f10ca56eac..e5a7b031da2d6 100644
--- a/drivers/nvdimm/nd_virtio.c
+++ b/drivers/nvdimm/nd_virtio.c
@@ -44,6 +44,8 @@ static int virtio_pmem_flush(struct nd_region *nd_region)
 	unsigned long flags;
 	int err, err1;
 
+	guard(mutex)(&vpmem->flush_lock);
+
 	/*
 	 * Don't bother to submit the request to the device if the device is
 	 * not activated.
@@ -53,7 +55,6 @@ static int virtio_pmem_flush(struct nd_region *nd_region)
 		return -EIO;
 	}
 
-	might_sleep();
 	req_data = kmalloc(sizeof(*req_data), GFP_KERNEL);
 	if (!req_data)
 		return -ENOMEM;
diff --git a/drivers/nvdimm/virtio_pmem.c b/drivers/nvdimm/virtio_pmem.c
index a92eb172f0e7e..4eebb2ec3cf97 100644
--- a/drivers/nvdimm/virtio_pmem.c
+++ b/drivers/nvdimm/virtio_pmem.c
@@ -49,6 +49,7 @@ static int virtio_pmem_probe(struct virtio_device *vdev)
 		goto out_err;
 	}
 
+	mutex_init(&vpmem->flush_lock);
 	vpmem->vdev = vdev;
 	vdev->priv = vpmem;
 	err = init_vq(vpmem);
diff --git a/drivers/nvdimm/virtio_pmem.h b/drivers/nvdimm/virtio_pmem.h
index 0dddefe594c46..f72cf17f9518f 100644
--- a/drivers/nvdimm/virtio_pmem.h
+++ b/drivers/nvdimm/virtio_pmem.h
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
 
 struct virtio_pmem_request {
@@ -35,6 +36,9 @@ struct virtio_pmem {
 	/* Virtio pmem request queue */
 	struct virtqueue *req_vq;
 
+	/* Serialize flush requests to the device. */
+	struct mutex flush_lock;
+
 	/* nvdimm bus registers virtio pmem device */
 	struct nvdimm_bus *nvdimm_bus;
 	struct nvdimm_bus_descriptor nd_desc;
-- 
2.51.0
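[Note for reviewers: the fio parameters quoted in the reproducer above can be
expressed as a job file along these lines. This is a sketch reconstructed from
the listed parameters only; the job name and target directory (taken from the
mount command in the reproducer) are assumptions, not part of the original
report.]

```ini
; hypothetical job file matching the reproducer's fio parameters
[virtio-pmem-flush-repro]
ioengine=io_uring
rw=randwrite
bs=4k
iodepth=64
numjobs=64
direct=1
fsync=1            ; issue an fsync after every write, driving flush traffic
runtime=30s
time_based=1
directory=/mnt/bench   ; assumed: the dax mount point from the reproducer
```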