The Linux Kernel Mailing List
* Re: [PATCH] Fix race bug in nvme_poll_irqdisable()
       [not found] <20260307194636.2755443-2-iam@sung-woo.kim>
@ 2026-05-13 18:27 ` Bjorn Helgaas
  2026-05-14 14:22   ` Keith Busch
  2026-05-14  7:56 ` Sagi Grimberg
  1 sibling, 1 reply; 3+ messages in thread
From: Bjorn Helgaas @ 2026-05-13 18:27 UTC (permalink / raw)
  To: Sungwoo Kim
  Cc: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	Chao Shi, Weidong Zhu, Dave Tian, linux-nvme, linux-kernel

On Sat, Mar 07, 2026 at 02:46:36PM -0500, Sungwoo Kim wrote:
> In the following scenario, pdev can be disabled between (1) and (3)
> by (2). This sets pdev->msix_enabled = 0. After that, pci_irq_vector()
> returns the MSI-X IRQ (> 15) at (1) but the INTx IRQ (<= 15) at (3).
> This triggers an "Unbalanced enable" IRQ warning, because enable_irq()
> is called on an INTx IRQ that was never disabled.
> 
> To fix this, save the IRQ number in a local variable and ensure that
> disable_irq() and enable_irq() operate on the same IRQ number. Even
> if pci_free_irq_vectors() frees the IRQ concurrently, calling
> disable_irq() and enable_irq() on a stale IRQ number is still valid
> and safe, and the depth accounting remains balanced.
> 
> task 1:
> nvme_poll_irqdisable()
>   disable_irq(pci_irq_vector(pdev, nvmeq->cq_vector)) ...(1)
>   enable_irq(pci_irq_vector(pdev, nvmeq->cq_vector))  ...(3)
> 
> task 2:
> nvme_reset_work()
>   nvme_dev_disable()
>     pdev->msix_enabled = 0;  ...(2)
> ...

>  static void nvme_poll_irqdisable(struct nvme_queue *nvmeq)
>  {
>  	struct pci_dev *pdev = to_pci_dev(nvmeq->dev->dev);
> +	int irq;
>  
>  	WARN_ON_ONCE(test_bit(NVMEQ_POLLED, &nvmeq->flags));
>  
> -	disable_irq(pci_irq_vector(pdev, nvmeq->cq_vector));
> +	irq = pci_irq_vector(pdev, nvmeq->cq_vector);
> +	disable_irq(irq);
>  	spin_lock(&nvmeq->cq_poll_lock);
>  	nvme_poll_cq(nvmeq, NULL);
>  	spin_unlock(&nvmeq->cq_poll_lock);
> -	enable_irq(pci_irq_vector(pdev, nvmeq->cq_vector));
> +	enable_irq(irq);

An internal run of sashiko complained about this, and I think it's
right.  As the commit log mentions, the cached IRQ number is stale if
nvme_reset_work() frees all the vectors between (1) and (3).  It's
likely the pci_alloc_irq_vectors() in nvme_pci_enable() will get the
same IRQ number, but it would be a coincidence, and it doesn't feel
like a good idea to rely on it.
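
The warning itself comes from the IRQ core's depth accounting; a
minimal userspace model of it (irq_depth[] stands in for the
per-descriptor depth counter):

#include <stdio.h>

static int irq_depth[256];	/* 0 == enabled */

static void model_disable_irq(unsigned int irq)
{
	irq_depth[irq]++;
}

static void model_enable_irq(unsigned int irq)
{
	if (!irq_depth[irq]) {
		printf("WARN: Unbalanced enable for IRQ %u\n", irq);
		return;
	}
	irq_depth[irq]--;
}

int main(void)
{
	model_disable_irq(33);	/* (1) disables the MSI-X IRQ */
	/* (2) pdev->msix_enabled is cleared here */
	model_enable_irq(10);	/* (3) enables the INTx IRQ -> warning */
	return 0;
}

The unmatched disable also leaves the MSI-X IRQ with a stale depth of
1, so in the model it stays masked.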

First sashiko review:

  This commit caches the IRQ number in nvme_poll_irqdisable() to
  ensure disable_irq() and enable_irq() operate on the same IRQ
  number, preventing an unbalanced enable warning if the device is
  concurrently disabled.

  If pci_free_irq_vectors() frees the IRQ concurrently, is it possible
  for the IRQ number to be reallocated to a completely different
  device before enable_irq() is called? Could this cause an unbalanced
  enable or incorrectly unmask the interrupt for the new device?

Second sashiko review:

  The commit caches the return value of pci_irq_vector() into a local
  variable to ensure disable_irq() and enable_irq() operate on the
  same IRQ number, preventing an unbalanced enable warning when
  pdev->msix_enabled is cleared concurrently.

  If pdev->msix_enabled is cleared concurrently and nvmeq->cq_vector >
  0, pci_irq_vector() will return -EINVAL. Since disable_irq() takes
  an unsigned int, does this result in passing a very large unsigned
  number (like 0xffffffea) to the IRQ subsystem?

  The commit message notes that operating on a stale IRQ number is
  valid and safe. However, if pci_free_irq_vectors() frees the MSI-X
  vector concurrently, couldn't this cached irq number be recycled and
  reallocated to a completely different device before enable_irq(irq)
  executes? If the new device had explicitly disabled its IRQ, does
  this code inadvertently enable it, breaking the other device's
  synchronization?
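
On the second review's -EINVAL question: disable_irq() does take an
unsigned int, so a negative pci_irq_vector() return would be
implicitly converted at the call. A quick userspace check
(model_disable_irq() just stands in for the signature):

#include <stdio.h>
#include <errno.h>

static void model_disable_irq(unsigned int irq)
{
	printf("irq = %u (%#x)\n", irq, irq);
}

int main(void)
{
	int ret = -EINVAL;	/* -22 */
	model_disable_irq(ret);	/* prints 4294967274 (0xffffffea) */
	return 0;
}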


* Re: [PATCH] Fix race bug in nvme_poll_irqdisable()
       [not found] <20260307194636.2755443-2-iam@sung-woo.kim>
  2026-05-13 18:27 ` [PATCH] Fix race bug in nvme_poll_irqdisable() Bjorn Helgaas
@ 2026-05-14  7:56 ` Sagi Grimberg
  1 sibling, 0 replies; 3+ messages in thread
From: Sagi Grimberg @ 2026-05-14  7:56 UTC (permalink / raw)
  To: Sungwoo Kim, Keith Busch, Jens Axboe, Christoph Hellwig
  Cc: Chao Shi, Weidong Zhu, Dave Tian, linux-nvme, linux-kernel

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCH] Fix race bug in nvme_poll_irqdisable()
  2026-05-13 18:27 ` [PATCH] Fix race bug in nvme_poll_irqdisable() Bjorn Helgaas
@ 2026-05-14 14:22   ` Keith Busch
  0 siblings, 0 replies; 3+ messages in thread
From: Keith Busch @ 2026-05-14 14:22 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Sungwoo Kim, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	Chao Shi, Weidong Zhu, Dave Tian, linux-nvme, linux-kernel

On Wed, May 13, 2026 at 01:27:36PM -0500, Bjorn Helgaas wrote:
> On Sat, Mar 07, 2026 at 02:46:36PM -0500, Sungwoo Kim wrote:
> > In the following scenario, pdev can be disabled between (1) and (3)
> > by (2). This sets pdev->msix_enabled = 0. After that, pci_irq_vector()
> > returns the MSI-X IRQ (> 15) at (1) but the INTx IRQ (<= 15) at (3).
> > This triggers an "Unbalanced enable" IRQ warning, because enable_irq()
> > is called on an INTx IRQ that was never disabled.
> > 
> > To fix this, save the IRQ number in a local variable and ensure that
> > disable_irq() and enable_irq() operate on the same IRQ number. Even
> > if pci_free_irq_vectors() frees the IRQ concurrently, calling
> > disable_irq() and enable_irq() on a stale IRQ number is still valid
> > and safe, and the depth accounting remains balanced.
> > 
> > task 1:
> > nvme_poll_irqdisable()
> >   disable_irq(pci_irq_vector(pdev, nvmeq->cq_vector)) ...(1)
> >   enable_irq(pci_irq_vector(pdev, nvmeq->cq_vector))  ...(3)
> > 
> > task 2:
> > nvme_reset_work()
> >   nvme_dev_disable()
> >     pdev->msix_enabled = 0;  ...(2)
> > ...
> 
> >  static void nvme_poll_irqdisable(struct nvme_queue *nvmeq)
> >  {
> >  	struct pci_dev *pdev = to_pci_dev(nvmeq->dev->dev);
> > +	int irq;
> >  
> >  	WARN_ON_ONCE(test_bit(NVMEQ_POLLED, &nvmeq->flags));
> >  
> > -	disable_irq(pci_irq_vector(pdev, nvmeq->cq_vector));
> > +	irq = pci_irq_vector(pdev, nvmeq->cq_vector);
> > +	disable_irq(irq);
> >  	spin_lock(&nvmeq->cq_poll_lock);
> >  	nvme_poll_cq(nvmeq, NULL);
> >  	spin_unlock(&nvmeq->cq_poll_lock);
> > -	enable_irq(pci_irq_vector(pdev, nvmeq->cq_vector));
> > +	enable_irq(irq);
> 
> An internal run of sashiko complained about this, and I think it's
> right.  As the commit log mentions, the cached IRQ number is stale if
> nvme_reset_work() frees all the vectors between (1) and (3).  It's
> likely the pci_alloc_irq_vectors() in nvme_pci_enable() will get the
> same IRQ number, but it would be a coincidence, and it doesn't feel
> like a good idea to rely on it.
> 
> First sashiko review:
> 
>   This commit caches the IRQ number in nvme_poll_irqdisable() to
>   ensure disable_irq() and enable_irq() operate on the same IRQ
>   number, preventing an unbalanced enable warning if the device is
>   concurrently disabled.
> 
>   If pci_free_irq_vectors() frees the IRQ concurrently, is it possible
>   for the IRQ number to be reallocated to a completely different
>   device before enable_irq() is called? Could this cause an unbalanced
>   enable or incorrectly unmask the interrupt for the new device?

Yeah, that looks legit. This should fix it:

---
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 139a10cd687f9..34845d73cb3ab 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1885,8 +1885,12 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req)
 	 */
 	if (test_bit(NVMEQ_POLLED, &nvmeq->flags))
 		nvme_poll(req->mq_hctx, NULL);
-	else
-		nvme_poll_irqdisable(nvmeq);
+	else {
+		mutex_lock(&dev->shutdown_lock);
+		if (test_bit(NVMEQ_ENABLED, &nvmeq->flags))
+			nvme_poll_irqdisable(nvmeq);
+		mutex_unlock(&dev->shutdown_lock);
+	}
 
 	if (blk_mq_rq_state(req) != MQ_RQ_IN_FLIGHT) {
 		dev_warn(dev->ctrl.device,
--
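
For context, the ordering this relies on (sketched from memory, not
verbatim mainline) is that nvme_dev_disable() clears NVMEQ_ENABLED and
frees the vectors while holding the same lock:

	/* sketch of nvme_dev_disable(), details elided */
	mutex_lock(&dev->shutdown_lock);
	/* ... nvme_suspend_queue() clears NVMEQ_ENABLED ... */
	pci_free_irq_vectors(pdev);	/* IRQ mapping goes away */
	mutex_unlock(&dev->shutdown_lock);

so the timeout path above either sees NVMEQ_ENABLED and a stable IRQ
mapping, or skips the poll entirely.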

