public inbox for linux-kernel@vger.kernel.org
* [PATCH 0/4] vfio/virtio: Fix list_lock type and modernize locking
@ 2026-04-14 20:06 Alex Williamson
  2026-04-14 20:06 ` [PATCH 1/4] vfio/virtio: Convert list_lock from spinlock to mutex Alex Williamson
                   ` (4 more replies)
  0 siblings, 5 replies; 8+ messages in thread
From: Alex Williamson @ 2026-04-14 20:06 UTC (permalink / raw)
  To: alex
  Cc: Alex Williamson, yishaih, kvm, guojinhui.liam, linux-kernel,
	virtualization

Jinhui Guo reported a mismatched spin_lock()/spin_unlock_irq() pair
in virtiovf_read_device_context_chunk() where spin_unlock_irq() would
unconditionally enable interrupts despite spin_lock() never having
disabled them.  On closer inspection, the list_lock spinlock with IRQ
disabling was copied from the mlx5 variant driver where a hardirq
completion callback justifies it, but the virtio driver has no
interrupt context usage of list_lock.  Patch 1 converts list_lock to
a mutex, fixing the mismatch and aligning with peer vfio-pci variant
drivers.
Patch 2 converts the list_lock acquisitions to guard()/scoped_guard()
where the lock scope aligns naturally with function or block scope.

Patches 3 and 4 extend the same guard() conversion to the remaining
two mutexes in the driver (migf->lock and bar_mutex).  These are
relatively independent of the list_lock fix but complete the
conversion across the driver.  Thanks,

Alex

Alex Williamson (4):
  vfio/virtio: Convert list_lock from spinlock to mutex
  vfio/virtio: Use guard() for list_lock where applicable
  vfio/virtio: Use guard() for migf->lock where applicable
  vfio/virtio: Use guard() for bar_mutex in legacy I/O

 drivers/vfio/pci/virtio/common.h    |  2 +-
 drivers/vfio/pci/virtio/legacy_io.c | 17 +++---
 drivers/vfio/pci/virtio/migrate.c   | 90 ++++++++++++-----------------
 3 files changed, 46 insertions(+), 63 deletions(-)

-- 
2.51.0


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH 1/4] vfio/virtio: Convert list_lock from spinlock to mutex
  2026-04-14 20:06 [PATCH 0/4] vfio/virtio: Fix list_lock type and modernize locking Alex Williamson
@ 2026-04-14 20:06 ` Alex Williamson
  2026-04-14 20:06 ` [PATCH 2/4] vfio/virtio: Use guard() for list_lock where applicable Alex Williamson
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Alex Williamson @ 2026-04-14 20:06 UTC (permalink / raw)
  To: alex
  Cc: Alex Williamson, yishaih, kvm, guojinhui.liam, linux-kernel,
	virtualization, stable

The list_lock spinlock with IRQ disabling was copied from the mlx5
vfio-pci variant driver, where it is justified by a hardirq async
command completion callback that accesses the protected lists.  The
virtio driver has no such interrupt context usage; all list_lock
acquisitions occur in process context via file read/write operations
or state transitions under state_mutex.

Convert list_lock to a mutex to be consistent with peer vfio-pci
variant drivers (hisilicon, pds, qat, xe) which all use mutexes for
equivalent migration data protection.  This also fixes a mismatched
spin_lock()/spin_unlock_irq() pair in virtiovf_read_device_context_chunk()
that could incorrectly enable interrupts.

Reported-by: Jinhui Guo <guojinhui.liam@bytedance.com>
Closes: https://lore.kernel.org/all/20260413073603.30538-1-guojinhui.liam@bytedance.com
Fixes: 0bbc82e4ec79 ("vfio/virtio: Add support for the basic live migration functionality")
Cc: stable@vger.kernel.org
Assisted-by: Claude:claude-opus-4-6
Signed-off-by: Alex Williamson <alex.williamson@nvidia.com>
---
 drivers/vfio/pci/virtio/common.h  |  2 +-
 drivers/vfio/pci/virtio/migrate.c | 33 ++++++++++++++++---------------
 2 files changed, 18 insertions(+), 17 deletions(-)

diff --git a/drivers/vfio/pci/virtio/common.h b/drivers/vfio/pci/virtio/common.h
index cb3d5e57d3a3..3ccbd49e6abe 100644
--- a/drivers/vfio/pci/virtio/common.h
+++ b/drivers/vfio/pci/virtio/common.h
@@ -68,7 +68,7 @@ struct virtiovf_migration_file {
 	enum virtiovf_migf_state state;
 	enum virtiovf_load_state load_state;
 	/* synchronize access to the lists */
-	spinlock_t list_lock;
+	struct mutex list_lock;
 	struct list_head buf_list;
 	struct list_head avail_list;
 	struct virtiovf_data_buffer *buf;
diff --git a/drivers/vfio/pci/virtio/migrate.c b/drivers/vfio/pci/virtio/migrate.c
index 35fa2d6ed611..15fcd936528b 100644
--- a/drivers/vfio/pci/virtio/migrate.c
+++ b/drivers/vfio/pci/virtio/migrate.c
@@ -142,9 +142,9 @@ virtiovf_alloc_data_buffer(struct virtiovf_migration_file *migf, size_t length)
 
 static void virtiovf_put_data_buffer(struct virtiovf_data_buffer *buf)
 {
-	spin_lock_irq(&buf->migf->list_lock);
+	mutex_lock(&buf->migf->list_lock);
 	list_add_tail(&buf->buf_elm, &buf->migf->avail_list);
-	spin_unlock_irq(&buf->migf->list_lock);
+	mutex_unlock(&buf->migf->list_lock);
 }
 
 static int
@@ -170,21 +170,21 @@ virtiovf_get_data_buffer(struct virtiovf_migration_file *migf, size_t length)
 
 	INIT_LIST_HEAD(&free_list);
 
-	spin_lock_irq(&migf->list_lock);
+	mutex_lock(&migf->list_lock);
 	list_for_each_entry_safe(buf, temp_buf, &migf->avail_list, buf_elm) {
 		list_del_init(&buf->buf_elm);
 		if (buf->allocated_length >= length) {
-			spin_unlock_irq(&migf->list_lock);
+			mutex_unlock(&migf->list_lock);
 			goto found;
 		}
 		/*
 		 * Prevent holding redundant buffers. Put in a free
-		 * list and call at the end not under the spin lock
+		 * list and call at the end not under the mutex
 		 * (&migf->list_lock) to minimize its scope usage.
 		 */
 		list_add(&buf->buf_elm, &free_list);
 	}
-	spin_unlock_irq(&migf->list_lock);
+	mutex_unlock(&migf->list_lock);
 	buf = virtiovf_alloc_data_buffer(migf, length);
 
 found:
@@ -295,6 +295,7 @@ static int virtiovf_release_file(struct inode *inode, struct file *filp)
 	struct virtiovf_migration_file *migf = filp->private_data;
 
 	virtiovf_disable_fd(migf);
+	mutex_destroy(&migf->list_lock);
 	mutex_destroy(&migf->lock);
 	kfree(migf);
 	return 0;
@@ -308,7 +309,7 @@ virtiovf_get_data_buff_from_pos(struct virtiovf_migration_file *migf,
 	bool found = false;
 
 	*end_of_data = false;
-	spin_lock_irq(&migf->list_lock);
+	mutex_lock(&migf->list_lock);
 	if (list_empty(&migf->buf_list)) {
 		*end_of_data = true;
 		goto end;
@@ -329,7 +330,7 @@ virtiovf_get_data_buff_from_pos(struct virtiovf_migration_file *migf,
 	migf->state = VIRTIOVF_MIGF_STATE_ERROR;
 
 end:
-	spin_unlock_irq(&migf->list_lock);
+	mutex_unlock(&migf->list_lock);
 	return found ? buf : NULL;
 }
 
@@ -369,10 +370,10 @@ static ssize_t virtiovf_buf_read(struct virtiovf_data_buffer *vhca_buf,
 	}
 
 	if (*pos >= vhca_buf->start_pos + vhca_buf->length) {
-		spin_lock_irq(&vhca_buf->migf->list_lock);
+		mutex_lock(&vhca_buf->migf->list_lock);
 		list_del_init(&vhca_buf->buf_elm);
 		list_add_tail(&vhca_buf->buf_elm, &vhca_buf->migf->avail_list);
-		spin_unlock_irq(&vhca_buf->migf->list_lock);
+		mutex_unlock(&vhca_buf->migf->list_lock);
 	}
 
 	return done;
@@ -554,9 +555,9 @@ virtiovf_add_buf_header(struct virtiovf_data_buffer *header_buf,
 	header_buf->length = sizeof(header);
 	header_buf->start_pos = header_buf->migf->max_pos;
 	migf->max_pos += header_buf->length;
-	spin_lock_irq(&migf->list_lock);
+	mutex_lock(&migf->list_lock);
 	list_add_tail(&header_buf->buf_elm, &migf->buf_list);
-	spin_unlock_irq(&migf->list_lock);
+	mutex_unlock(&migf->list_lock);
 	return 0;
 }
 
@@ -621,9 +622,9 @@ virtiovf_read_device_context_chunk(struct virtiovf_migration_file *migf,
 
 	buf->start_pos = buf->migf->max_pos;
 	migf->max_pos += buf->length;
-	spin_lock(&migf->list_lock);
+	mutex_lock(&migf->list_lock);
 	list_add_tail(&buf->buf_elm, &migf->buf_list);
-	spin_unlock_irq(&migf->list_lock);
+	mutex_unlock(&migf->list_lock);
 	return 0;
 
 out_header:
@@ -692,7 +693,7 @@ virtiovf_pci_save_device_data(struct virtiovf_pci_core_device *virtvdev,
 	mutex_init(&migf->lock);
 	INIT_LIST_HEAD(&migf->buf_list);
 	INIT_LIST_HEAD(&migf->avail_list);
-	spin_lock_init(&migf->list_lock);
+	mutex_init(&migf->list_lock);
 	migf->virtvdev = virtvdev;
 
 	lockdep_assert_held(&virtvdev->state_mutex);
@@ -1082,7 +1083,7 @@ virtiovf_pci_resume_device_data(struct virtiovf_pci_core_device *virtvdev)
 	mutex_init(&migf->lock);
 	INIT_LIST_HEAD(&migf->buf_list);
 	INIT_LIST_HEAD(&migf->avail_list);
-	spin_lock_init(&migf->list_lock);
+	mutex_init(&migf->list_lock);
 
 	buf = virtiovf_alloc_data_buffer(migf, VIRTIOVF_TARGET_INITIAL_BUF_SIZE);
 	if (IS_ERR(buf)) {
-- 
2.51.0



* [PATCH 2/4] vfio/virtio: Use guard() for list_lock where applicable
  2026-04-14 20:06 [PATCH 0/4] vfio/virtio: Fix list_lock type and modernize locking Alex Williamson
  2026-04-14 20:06 ` [PATCH 1/4] vfio/virtio: Convert list_lock from spinlock to mutex Alex Williamson
@ 2026-04-14 20:06 ` Alex Williamson
  2026-04-14 20:06 ` [PATCH 3/4] vfio/virtio: Use guard() for migf->lock " Alex Williamson
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Alex Williamson @ 2026-04-14 20:06 UTC (permalink / raw)
  To: alex
  Cc: Alex Williamson, yishaih, kvm, guojinhui.liam, linux-kernel,
	virtualization

Convert list_lock mutex acquisitions to use guard() and scoped_guard()
where the lock scope aligns with the function or block scope.  This
simplifies virtiovf_get_data_buff_from_pos() by replacing goto-based
unwinding with direct returns.

Assisted-by: Claude:claude-opus-4-6
Signed-off-by: Alex Williamson <alex.williamson@nvidia.com>
---
 drivers/vfio/pci/virtio/migrate.c | 37 +++++++++++++------------------
 1 file changed, 16 insertions(+), 21 deletions(-)

diff --git a/drivers/vfio/pci/virtio/migrate.c b/drivers/vfio/pci/virtio/migrate.c
index 15fcd936528b..058d359c9b5a 100644
--- a/drivers/vfio/pci/virtio/migrate.c
+++ b/drivers/vfio/pci/virtio/migrate.c
@@ -142,9 +142,8 @@ virtiovf_alloc_data_buffer(struct virtiovf_migration_file *migf, size_t length)
 
 static void virtiovf_put_data_buffer(struct virtiovf_data_buffer *buf)
 {
-	mutex_lock(&buf->migf->list_lock);
+	guard(mutex)(&buf->migf->list_lock);
 	list_add_tail(&buf->buf_elm, &buf->migf->avail_list);
-	mutex_unlock(&buf->migf->list_lock);
 }
 
 static int
@@ -306,32 +305,27 @@ virtiovf_get_data_buff_from_pos(struct virtiovf_migration_file *migf,
 				loff_t pos, bool *end_of_data)
 {
 	struct virtiovf_data_buffer *buf;
-	bool found = false;
 
 	*end_of_data = false;
-	mutex_lock(&migf->list_lock);
+	guard(mutex)(&migf->list_lock);
+
 	if (list_empty(&migf->buf_list)) {
 		*end_of_data = true;
-		goto end;
+		return NULL;
 	}
 
 	buf = list_first_entry(&migf->buf_list, struct virtiovf_data_buffer,
 			       buf_elm);
 	if (pos >= buf->start_pos &&
-	    pos < buf->start_pos + buf->length) {
-		found = true;
-		goto end;
-	}
+	    pos < buf->start_pos + buf->length)
+		return buf;
 
 	/*
 	 * As we use a stream based FD we may expect having the data always
 	 * on first chunk
 	 */
 	migf->state = VIRTIOVF_MIGF_STATE_ERROR;
-
-end:
-	mutex_unlock(&migf->list_lock);
-	return found ? buf : NULL;
+	return NULL;
 }
 
 static ssize_t virtiovf_buf_read(struct virtiovf_data_buffer *vhca_buf,
@@ -370,10 +364,9 @@ static ssize_t virtiovf_buf_read(struct virtiovf_data_buffer *vhca_buf,
 	}
 
 	if (*pos >= vhca_buf->start_pos + vhca_buf->length) {
-		mutex_lock(&vhca_buf->migf->list_lock);
+		guard(mutex)(&vhca_buf->migf->list_lock);
 		list_del_init(&vhca_buf->buf_elm);
 		list_add_tail(&vhca_buf->buf_elm, &vhca_buf->migf->avail_list);
-		mutex_unlock(&vhca_buf->migf->list_lock);
 	}
 
 	return done;
@@ -555,9 +548,10 @@ virtiovf_add_buf_header(struct virtiovf_data_buffer *header_buf,
 	header_buf->length = sizeof(header);
 	header_buf->start_pos = header_buf->migf->max_pos;
 	migf->max_pos += header_buf->length;
-	mutex_lock(&migf->list_lock);
-	list_add_tail(&header_buf->buf_elm, &migf->buf_list);
-	mutex_unlock(&migf->list_lock);
+
+	scoped_guard(mutex, &migf->list_lock)
+		list_add_tail(&header_buf->buf_elm, &migf->buf_list);
+
 	return 0;
 }
 
@@ -622,9 +616,10 @@ virtiovf_read_device_context_chunk(struct virtiovf_migration_file *migf,
 
 	buf->start_pos = buf->migf->max_pos;
 	migf->max_pos += buf->length;
-	mutex_lock(&migf->list_lock);
-	list_add_tail(&buf->buf_elm, &migf->buf_list);
-	mutex_unlock(&migf->list_lock);
+
+	scoped_guard(mutex, &migf->list_lock)
+		list_add_tail(&buf->buf_elm, &migf->buf_list);
+
 	return 0;
 
 out_header:
-- 
2.51.0



* [PATCH 3/4] vfio/virtio: Use guard() for migf->lock where applicable
  2026-04-14 20:06 [PATCH 0/4] vfio/virtio: Fix list_lock type and modernize locking Alex Williamson
  2026-04-14 20:06 ` [PATCH 1/4] vfio/virtio: Convert list_lock from spinlock to mutex Alex Williamson
  2026-04-14 20:06 ` [PATCH 2/4] vfio/virtio: Use guard() for list_lock where applicable Alex Williamson
@ 2026-04-14 20:06 ` Alex Williamson
  2026-04-14 20:06 ` [PATCH 4/4] vfio/virtio: Use guard() for bar_mutex in legacy I/O Alex Williamson
  2026-04-15 15:12 ` [PATCH 0/4] vfio/virtio: Fix list_lock type and modernize locking Yishai Hadas
  4 siblings, 0 replies; 8+ messages in thread
From: Alex Williamson @ 2026-04-14 20:06 UTC (permalink / raw)
  To: alex
  Cc: Alex Williamson, yishaih, kvm, guojinhui.liam, linux-kernel,
	virtualization

Convert migf->lock acquisitions in virtiovf_disable_fd() and
virtiovf_save_read() to use guard().  In virtiovf_save_read() this
eliminates the out_unlock label and multiple goto paths by allowing
direct returns, and removes the need for the done variable to double
as an error carrier.

Assisted-by: Claude:claude-opus-4-6
Signed-off-by: Alex Williamson <alex.williamson@nvidia.com>
---
 drivers/vfio/pci/virtio/migrate.c | 40 +++++++++++--------------------
 1 file changed, 14 insertions(+), 26 deletions(-)

diff --git a/drivers/vfio/pci/virtio/migrate.c b/drivers/vfio/pci/virtio/migrate.c
index 058d359c9b5a..80ca8a1b625a 100644
--- a/drivers/vfio/pci/virtio/migrate.c
+++ b/drivers/vfio/pci/virtio/migrate.c
@@ -224,10 +224,9 @@ static void virtiovf_clean_migf_resources(struct virtiovf_migration_file *migf)
 
 static void virtiovf_disable_fd(struct virtiovf_migration_file *migf)
 {
-	mutex_lock(&migf->lock);
+	guard(mutex)(&migf->lock);
 	migf->state = VIRTIOVF_MIGF_STATE_ERROR;
 	migf->filp->f_pos = 0;
-	mutex_unlock(&migf->lock);
 }
 
 static void virtiovf_disable_fds(struct virtiovf_pci_core_device *virtvdev)
@@ -385,11 +384,10 @@ static ssize_t virtiovf_save_read(struct file *filp, char __user *buf, size_t le
 		return -ESPIPE;
 	pos = &filp->f_pos;
 
-	mutex_lock(&migf->lock);
-	if (migf->state == VIRTIOVF_MIGF_STATE_ERROR) {
-		done = -ENODEV;
-		goto out_unlock;
-	}
+	guard(mutex)(&migf->lock);
+
+	if (migf->state == VIRTIOVF_MIGF_STATE_ERROR)
+		return -ENODEV;
 
 	while (len) {
 		ssize_t count;
@@ -398,34 +396,24 @@ static ssize_t virtiovf_save_read(struct file *filp, char __user *buf, size_t le
 		if (first_loop_call) {
 			first_loop_call = false;
 			/* Temporary end of file as part of PRE_COPY */
-			if (end_of_data && migf->state == VIRTIOVF_MIGF_STATE_PRECOPY) {
-				done = -ENOMSG;
-				goto out_unlock;
-			}
-			if (end_of_data && migf->state != VIRTIOVF_MIGF_STATE_COMPLETE) {
-				done = -EINVAL;
-				goto out_unlock;
-			}
+			if (end_of_data && migf->state == VIRTIOVF_MIGF_STATE_PRECOPY)
+				return -ENOMSG;
+			if (end_of_data && migf->state != VIRTIOVF_MIGF_STATE_COMPLETE)
+				return -EINVAL;
 		}
 
 		if (end_of_data)
-			goto out_unlock;
+			return done;
 
-		if (!vhca_buf) {
-			done = -EINVAL;
-			goto out_unlock;
-		}
+		if (!vhca_buf)
+			return -EINVAL;
 
 		count = virtiovf_buf_read(vhca_buf, &buf, &len, pos);
-		if (count < 0) {
-			done = count;
-			goto out_unlock;
-		}
+		if (count < 0)
+			return count;
 		done += count;
 	}
 
-out_unlock:
-	mutex_unlock(&migf->lock);
 	return done;
 }
 
-- 
2.51.0



* [PATCH 4/4] vfio/virtio: Use guard() for bar_mutex in legacy I/O
  2026-04-14 20:06 [PATCH 0/4] vfio/virtio: Fix list_lock type and modernize locking Alex Williamson
                   ` (2 preceding siblings ...)
  2026-04-14 20:06 ` [PATCH 3/4] vfio/virtio: Use guard() for migf->lock " Alex Williamson
@ 2026-04-14 20:06 ` Alex Williamson
  2026-04-15 15:12 ` [PATCH 0/4] vfio/virtio: Fix list_lock type and modernize locking Yishai Hadas
  4 siblings, 0 replies; 8+ messages in thread
From: Alex Williamson @ 2026-04-14 20:06 UTC (permalink / raw)
  To: alex
  Cc: Alex Williamson, yishaih, kvm, guojinhui.liam, linux-kernel,
	virtualization

Convert the bar_mutex acquisition in virtiovf_issue_legacy_rw_cmd()
to use guard(), eliminating the out label and goto-based error paths
in favor of direct returns.

Assisted-by: Claude:claude-opus-4-6
Signed-off-by: Alex Williamson <alex.williamson@nvidia.com>
---
 drivers/vfio/pci/virtio/legacy_io.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/drivers/vfio/pci/virtio/legacy_io.c b/drivers/vfio/pci/virtio/legacy_io.c
index 1ed349a55629..f022301e60d6 100644
--- a/drivers/vfio/pci/virtio/legacy_io.c
+++ b/drivers/vfio/pci/virtio/legacy_io.c
@@ -34,7 +34,9 @@ virtiovf_issue_legacy_rw_cmd(struct virtiovf_pci_core_device *virtvdev,
 	common = pos < VIRTIO_PCI_CONFIG_OFF(msix_enabled);
 	/* offset within the relevant configuration area */
 	offset = common ? pos : pos - VIRTIO_PCI_CONFIG_OFF(msix_enabled);
-	mutex_lock(&virtvdev->bar_mutex);
+
+	guard(mutex)(&virtvdev->bar_mutex);
+
 	if (read) {
 		if (common)
 			ret = virtio_pci_admin_legacy_common_io_read(pdev, offset,
@@ -43,14 +45,12 @@ virtiovf_issue_legacy_rw_cmd(struct virtiovf_pci_core_device *virtvdev,
 			ret = virtio_pci_admin_legacy_device_io_read(pdev, offset,
 					count, bar0_buf + pos);
 		if (ret)
-			goto out;
+			return ret;
 		if (copy_to_user(buf, bar0_buf + pos, count))
-			ret = -EFAULT;
+			return -EFAULT;
 	} else {
-		if (copy_from_user(bar0_buf + pos, buf, count)) {
-			ret = -EFAULT;
-			goto out;
-		}
+		if (copy_from_user(bar0_buf + pos, buf, count))
+			return -EFAULT;
 
 		if (common)
 			ret = virtio_pci_admin_legacy_common_io_write(pdev, offset,
@@ -59,8 +59,7 @@ virtiovf_issue_legacy_rw_cmd(struct virtiovf_pci_core_device *virtvdev,
 			ret = virtio_pci_admin_legacy_device_io_write(pdev, offset,
 					count, bar0_buf + pos);
 	}
-out:
-	mutex_unlock(&virtvdev->bar_mutex);
+
 	return ret;
 }
 
-- 
2.51.0



* Re: [PATCH 0/4] vfio/virtio: Fix list_lock type and modernize locking
  2026-04-14 20:06 [PATCH 0/4] vfio/virtio: Fix list_lock type and modernize locking Alex Williamson
                   ` (3 preceding siblings ...)
  2026-04-14 20:06 ` [PATCH 4/4] vfio/virtio: Use guard() for bar_mutex in legacy I/O Alex Williamson
@ 2026-04-15 15:12 ` Yishai Hadas
  2026-04-15 15:39   ` Alex Williamson
  4 siblings, 1 reply; 8+ messages in thread
From: Yishai Hadas @ 2026-04-15 15:12 UTC (permalink / raw)
  To: Alex Williamson, alex
  Cc: kvm, guojinhui.liam, linux-kernel, virtualization, yishaih

On 14/04/2026 23:06, Alex Williamson wrote:
> Jinhui Guo reported a mismatched spin_lock()/spin_unlock_irq() pair
> in virtiovf_read_device_context_chunk() where spin_unlock_irq() would
> unconditionally enable interrupts despite spin_lock() never having
> disabled them.  On closer inspection, the list_lock spinlock with IRQ
> disabling was copied from the mlx5 variant driver where a hardirq
> completion callback justifies it, but the virtio driver has no
> interrupt context usage of list_lock.  Patch 1 converts list_lock to
> a mutex, fixing the mismatch and aligning with peer vfio-pci variant
> drivers.

Alex,
How about staying with spin_lock but without the 'irq' variant, instead
of replacing it with a mutex?

The scope of the lock is very small, which fits a spinlock well.
We may potentially see a performance degradation with a mutex compared
to a spinlock in the hot path of STOP_COPY where this lock is used.

Note:
I don't see that other peer vfio-pci variant drivers maintain a list of
buffers as this driver does, unless I missed that.

> Patch 2 converts the list_lock acquisitions to guard()/scoped_guard()
> where the lock scope aligns naturally with function or block scope.
> 

This patch might not be applicable if we'll stay with spin_lock.

> Patches 3 and 4 extend the same guard() conversion to the remaining
> two mutexes in the driver (migf->lock and bar_mutex).  These are
> relatively independent of the list_lock fix but complete the
> conversion across the driver.  Thanks,
> 

Those 2 patches seem fine to me.
Reviewed-by: Yishai Hadas <yishaih@nvidia.com>

Yishai

> Alex
> 
> Alex Williamson (4):
>    vfio/virtio: Convert list_lock from spinlock to mutex
>    vfio/virtio: Use guard() for list_lock where applicable
>    vfio/virtio: Use guard() for migf->lock where applicable
>    vfio/virtio: Use guard() for bar_mutex in legacy I/O
> 
>   drivers/vfio/pci/virtio/common.h    |  2 +-
>   drivers/vfio/pci/virtio/legacy_io.c | 17 +++---
>   drivers/vfio/pci/virtio/migrate.c   | 90 ++++++++++++-----------------
>   3 files changed, 46 insertions(+), 63 deletions(-)
> 



* Re: [PATCH 0/4] vfio/virtio: Fix list_lock type and modernize locking
  2026-04-15 15:12 ` [PATCH 0/4] vfio/virtio: Fix list_lock type and modernize locking Yishai Hadas
@ 2026-04-15 15:39   ` Alex Williamson
  2026-04-15 17:23     ` Yishai Hadas
  0 siblings, 1 reply; 8+ messages in thread
From: Alex Williamson @ 2026-04-15 15:39 UTC (permalink / raw)
  To: Yishai Hadas; +Cc: alex, kvm, guojinhui.liam, linux-kernel, virtualization

On Wed, 15 Apr 2026 18:12:50 +0300
Yishai Hadas <yishaih@nvidia.com> wrote:

> On 14/04/2026 23:06, Alex Williamson wrote:
> > Jinhui Guo reported a mismatched spin_lock()/spin_unlock_irq() pair
> > in virtiovf_read_device_context_chunk() where spin_unlock_irq() would
> > unconditionally enable interrupts despite spin_lock() never having
> > disabled them.  On closer inspection, the list_lock spinlock with IRQ
> > disabling was copied from the mlx5 variant driver where a hardirq
> > completion callback justifies it, but the virtio driver has no
> > interrupt context usage of list_lock.  Patch 1 converts list_lock to
> > a mutex, fixing the mismatch and aligning with peer vfio-pci variant
> > drivers.  
> 
> Alex,
> How about staying with spin_lock but without the 'irq' variant, instead
> of replacing it with a mutex?
> 
> The scope of the lock is very small, which fits a spinlock well.
> We may potentially see a performance degradation with a mutex compared
> to a spinlock in the hot path of STOP_COPY where this lock is used.
> 
> Note:
> I don't see that other peer vfio-pci variant drivers maintain a list of
> buffers as this driver does, unless I missed that.

The argument doesn't make sense to me; we use a spinlock if we have an
operation that cannot be preempted and a spinlock-irq if we need to
manage that from a hardirq context.  I think we just need mutual
exclusion here.  Stealing the CPU because you want the absolute best
performance for a little bit of list manipulation is not valid
justification.

Documentation/locking/mutex-design.rst:

  When to use mutexes
  -------------------

  Unless the strict semantics of mutexes are unsuitable and/or the critical
  region prevents the lock from being shared, always prefer them to any other
  locking primitive.

Can you cite specific requirements for a spinlock in the critical
section here?  Thanks,

Alex
 
> > Patch 2 converts the list_lock acquisitions to guard()/scoped_guard()
> > where the lock scope aligns naturally with function or block scope.
> >   
> 
> This patch might not be applicable if we'll stay with spin_lock.
> 
> > Patches 3 and 4 extend the same guard() conversion to the remaining
> > two mutexes in the driver (migf->lock and bar_mutex).  These are
> > relatively independent of the list_lock fix but complete the
> > conversion across the driver.  Thanks,
> >   
> 
> Those 2 patches seem fine to me.
> Reviewed-by: Yishai Hadas <yishaih@nvidia.com>
> 
> Yishai
> 
> > Alex
> > 
> > Alex Williamson (4):
> >    vfio/virtio: Convert list_lock from spinlock to mutex
> >    vfio/virtio: Use guard() for list_lock where applicable
> >    vfio/virtio: Use guard() for migf->lock where applicable
> >    vfio/virtio: Use guard() for bar_mutex in legacy I/O
> > 
> >   drivers/vfio/pci/virtio/common.h    |  2 +-
> >   drivers/vfio/pci/virtio/legacy_io.c | 17 +++---
> >   drivers/vfio/pci/virtio/migrate.c   | 90 ++++++++++++-----------------
> >   3 files changed, 46 insertions(+), 63 deletions(-)
> >   
> 



* Re: [PATCH 0/4] vfio/virtio: Fix list_lock type and modernize locking
  2026-04-15 15:39   ` Alex Williamson
@ 2026-04-15 17:23     ` Yishai Hadas
  0 siblings, 0 replies; 8+ messages in thread
From: Yishai Hadas @ 2026-04-15 17:23 UTC (permalink / raw)
  To: Alex Williamson; +Cc: alex, kvm, guojinhui.liam, linux-kernel, virtualization

On 15/04/2026 18:39, Alex Williamson wrote:
> On Wed, 15 Apr 2026 18:12:50 +0300
> Yishai Hadas <yishaih@nvidia.com> wrote:
> 
>> On 14/04/2026 23:06, Alex Williamson wrote:
>>> Jinhui Guo reported a mismatched spin_lock()/spin_unlock_irq() pair
>>> in virtiovf_read_device_context_chunk() where spin_unlock_irq() would
>>> unconditionally enable interrupts despite spin_lock() never having
>>> disabled them.  On closer inspection, the list_lock spinlock with IRQ
>>> disabling was copied from the mlx5 variant driver where a hardirq
>>> completion callback justifies it, but the virtio driver has no
>>> interrupt context usage of list_lock.  Patch 1 converts list_lock to
>>> a mutex, fixing the mismatch and aligning with peer vfio-pci variant
>>> drivers.
>>
>> Alex,
>> How about staying with spin_lock but without the 'irq' variant, instead
>> of replacing it with a mutex?
>>
>> The scope of the lock is very small, which fits a spinlock well.
>> We may potentially see a performance degradation with a mutex compared
>> to a spinlock in the hot path of STOP_COPY where this lock is used.
>>
>> Note:
>> I don't see that other peer vfio-pci variant drivers maintain a list of
>> buffers as this driver does, unless I missed that.
> 
> The argument doesn't make sense to me; we use a spinlock if we have an
> operation that cannot be preempted and a spinlock-irq if we need to
> manage that from a hardirq context.  I think we just need mutual
> exclusion here.  Stealing the CPU because you want the absolute best
> performance for a little bit of list manipulation is not valid
> justification.
> 
> Documentation/locking/mutex-design.rst:
> 
>    When to use mutexes
>    -------------------
> 
>    Unless the strict semantics of mutexes are unsuitable and/or the critical
>    region prevents the lock from being shared, always prefer them to any other
>    locking primitive.
> 
> Can you cite specific requirements for a spinlock in the critical
> section here?  Thanks,
> 

I see

In our use case, we mostly expect a single task handling the migration,
so real contention is unlikely and we don't expect the mutex's
sleep/wakeup overhead to be triggered in practice.

That said, I'm fine with using a mutex here, as in your patch.

So,
Reviewed-by: Yishai Hadas <yishaih@nvidia.com>

For the full series.

Yishai

>   
>>> Patch 2 converts the list_lock acquisitions to guard()/scoped_guard()
>>> where the lock scope aligns naturally with function or block scope.
>>>    
>>
>> This patch might not be applicable if we'll stay with spin_lock.
>>
>>> Patches 3 and 4 extend the same guard() conversion to the remaining
>>> two mutexes in the driver (migf->lock and bar_mutex).  These are
>>> relatively independent of the list_lock fix but complete the
>>> conversion across the driver.  Thanks,
>>>    
>>
>> Those 2 patches seem fine to me.
>> Reviewed-by: Yishai Hadas <yishaih@nvidia.com>
>>
>> Yishai
>>
>>> Alex
>>>
>>> Alex Williamson (4):
>>>     vfio/virtio: Convert list_lock from spinlock to mutex
>>>     vfio/virtio: Use guard() for list_lock where applicable
>>>     vfio/virtio: Use guard() for migf->lock where applicable
>>>     vfio/virtio: Use guard() for bar_mutex in legacy I/O
>>>
>>>    drivers/vfio/pci/virtio/common.h    |  2 +-
>>>    drivers/vfio/pci/virtio/legacy_io.c | 17 +++---
>>>    drivers/vfio/pci/virtio/migrate.c   | 90 ++++++++++++-----------------
>>>    3 files changed, 46 insertions(+), 63 deletions(-)
>>>    
>>
> 


