The Linux Kernel Mailing List
From: Keith Busch <kbusch@kernel.org>
To: Bjorn Helgaas <helgaas@kernel.org>
Cc: Sungwoo Kim <iam@sung-woo.kim>, Jens Axboe <axboe@kernel.dk>,
	Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
	Chao Shi <cshi008@fiu.edu>, Weidong Zhu <weizhu@fiu.edu>,
	Dave Tian <daveti@purdue.edu>,
	linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] Fix race bug in nvme_poll_irqdisable()
Date: Thu, 14 May 2026 08:22:39 -0600
Message-ID: <agXar7qtrPkpKaK5@kbusch-mbp>
In-Reply-To: <20260513182736.GA324918@bhelgaas>

On Wed, May 13, 2026 at 01:27:36PM -0500, Bjorn Helgaas wrote:
> On Sat, Mar 07, 2026 at 02:46:36PM -0500, Sungwoo Kim wrote:
> > In the following scenario, pdev can be disabled between (1) and (3) by
> > (2). This sets pdev->msix_enabled = 0. Then, pci_irq_vector() will
> > return an MSI-X IRQ (>15) for (1) but an INTx IRQ (<=15) for (3).
> > This causes an IRQ warning because it tries to enable an INTx IRQ
> > that has never been disabled before.
> > 
> > To fix this, save IRQ number into a local variable and ensure
> > disable_irq() and enable_irq() operate on the same IRQ number.
> > Even if pci_free_irq_vectors() frees the IRQ concurrently, calling
> > disable_irq() and enable_irq() on a stale IRQ number is still valid
> > and safe, and the depth accounting remains balanced.
> > 
> > task 1:
> > nvme_poll_irqdisable()
> >   disable_irq(pci_irq_vector(pdev, nvmeq->cq_vector)) ...(1)
> >   enable_irq(pci_irq_vector(pdev, nvmeq->cq_vector))  ...(3)
> > 
> > task 2:
> > nvme_reset_work()
> >   nvme_dev_disable()
> >     pdev->msix_enabled = 0;  ...(2)
> > ...
> 
> >  static void nvme_poll_irqdisable(struct nvme_queue *nvmeq)
> >  {
> >  	struct pci_dev *pdev = to_pci_dev(nvmeq->dev->dev);
> > +	int irq;
> >  
> >  	WARN_ON_ONCE(test_bit(NVMEQ_POLLED, &nvmeq->flags));
> >  
> > -	disable_irq(pci_irq_vector(pdev, nvmeq->cq_vector));
> > +	irq = pci_irq_vector(pdev, nvmeq->cq_vector);
> > +	disable_irq(irq);
> >  	spin_lock(&nvmeq->cq_poll_lock);
> >  	nvme_poll_cq(nvmeq, NULL);
> >  	spin_unlock(&nvmeq->cq_poll_lock);
> > -	enable_irq(pci_irq_vector(pdev, nvmeq->cq_vector));
> > +	enable_irq(irq);
> 
> An internal run of sashiko complained about this, and I think it's
> right.  As the commit log mentions, the cached IRQ number is stale if
> nvme_reset_work() frees all the vectors between (1) and (3).  It's
> likely the pci_alloc_irq_vectors() in nvme_pci_enable() will get the
> same IRQ number, but it would be a coincidence, and it doesn't feel
> like a good idea to rely on it.
> 
> First sashiko review:
> 
>   This commit caches the IRQ number in nvme_poll_irqdisable() to
>   ensure disable_irq() and enable_irq() operate on the same IRQ
>   number, preventing an unbalanced enable warning if the device is
>   concurrently disabled.
> 
>   If pci_free_irq_vectors() frees the IRQ concurrently, is it possible
>   for the IRQ number to be reallocated to a completely different
>   device before enable_irq() is called? Could this cause an unbalanced
>   enable or incorrectly unmask the interrupt for the new device?

Yeah, that looks legit. This should fix it:

---
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 139a10cd687f9..34845d73cb3ab 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1885,8 +1885,12 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req)
 	 */
-	if (test_bit(NVMEQ_POLLED, &nvmeq->flags))
-		nvme_poll(req->mq_hctx, NULL);
-	else
-		nvme_poll_irqdisable(nvmeq);
+	if (test_bit(NVMEQ_POLLED, &nvmeq->flags)) {
+		nvme_poll(req->mq_hctx, NULL);
+	} else {
+		mutex_lock(&dev->shutdown_lock);
+		if (test_bit(NVMEQ_ENABLED, &nvmeq->flags))
+			nvme_poll_irqdisable(nvmeq);
+		mutex_unlock(&dev->shutdown_lock);
+	}
 
 	if (blk_mq_rq_state(req) != MQ_RQ_IN_FLIGHT) {
 		dev_warn(dev->ctrl.device,
--


Thread overview: 3+ messages
     [not found] <20260307194636.2755443-2-iam@sung-woo.kim>
2026-05-13 18:27 ` [PATCH] Fix race bug in nvme_poll_irqdisable() Bjorn Helgaas
2026-05-14 14:22   ` Keith Busch [this message]
2026-05-14  7:56 ` Sagi Grimberg
