Linux-NVME Archive on lore.kernel.org
From: Bjorn Helgaas <helgaas@kernel.org>
To: Sungwoo Kim <iam@sung-woo.kim>
Cc: Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@kernel.dk>,
	Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
	Chao Shi <cshi008@fiu.edu>, Weidong Zhu <weizhu@fiu.edu>,
	Dave Tian <daveti@purdue.edu>,
	linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] Fix race bug in nvme_poll_irqdisable()
Date: Wed, 13 May 2026 13:27:36 -0500	[thread overview]
Message-ID: <20260513182736.GA324918@bhelgaas> (raw)
In-Reply-To: <20260307194636.2755443-2-iam@sung-woo.kim>

On Sat, Mar 07, 2026 at 02:46:36PM -0500, Sungwoo Kim wrote:
> In the following scenario, pdev can be disabled between (1) and (3) by
> (2), which sets pdev->msix_enabled = 0. pci_irq_vector() will then
> return an MSI-X IRQ (>15) at (1) but an INTx IRQ (<=15) at (3). This
> triggers an IRQ warning because enable_irq() is called on an INTx IRQ
> that was never disabled.
> 
> To fix this, save IRQ number into a local variable and ensure
> disable_irq() and enable_irq() operate on the same IRQ number.
> Even if pci_free_irq_vectors() frees the IRQ concurrently, disable_irq()
> and enable_irq() on a stale IRQ number is still valid and safe, and the
> depth accounting remains balanced.
> 
> task 1:
> nvme_poll_irqdisable()
>   disable_irq(pci_irq_vector(pdev, nvmeq->cq_vector)) ...(1)
>   enable_irq(pci_irq_vector(pdev, nvmeq->cq_vector))  ...(3)
> 
> task 2:
> nvme_reset_work()
>   nvme_dev_disable()
>     pdev->msix_enabled = 0;  ...(2)
> ...

>  static void nvme_poll_irqdisable(struct nvme_queue *nvmeq)
>  {
>  	struct pci_dev *pdev = to_pci_dev(nvmeq->dev->dev);
> +	int irq;
>  
>  	WARN_ON_ONCE(test_bit(NVMEQ_POLLED, &nvmeq->flags));
>  
> -	disable_irq(pci_irq_vector(pdev, nvmeq->cq_vector));
> +	irq = pci_irq_vector(pdev, nvmeq->cq_vector);
> +	disable_irq(irq);
>  	spin_lock(&nvmeq->cq_poll_lock);
>  	nvme_poll_cq(nvmeq, NULL);
>  	spin_unlock(&nvmeq->cq_poll_lock);
> -	enable_irq(pci_irq_vector(pdev, nvmeq->cq_vector));
> +	enable_irq(irq);

An internal run of sashiko complained about this, and I think it's
right.  As the commit log mentions, the cached IRQ number is stale if
nvme_reset_work() frees all the vectors between (1) and (3).  It's
likely the pci_alloc_irq_vectors() in nvme_pci_enable() will get the
same IRQ number, but it would be a coincidence, and it doesn't feel
like a good idea to rely on it.

First sashiko review:

  This commit caches the IRQ number in nvme_poll_irqdisable() to
  ensure disable_irq() and enable_irq() operate on the same IRQ
  number, preventing an unbalanced enable warning if the device is
  concurrently disabled.

  If pci_free_irq_vectors() frees the IRQ concurrently, is it possible
  for the IRQ number to be reallocated to a completely different
  device before enable_irq() is called? Could this cause an unbalanced
  enable or incorrectly unmask the interrupt for the new device?

Second sashiko review:

  The commit caches the return value of pci_irq_vector() into a local
  variable to ensure disable_irq() and enable_irq() operate on the
  same IRQ number, preventing an unbalanced enable warning when
  pdev->msix_enabled is cleared concurrently.

  If pdev->msix_enabled is cleared concurrently and nvmeq->cq_vector >
  0, pci_irq_vector() will return -EINVAL. Since disable_irq() takes
  an unsigned int, does this result in passing a very large unsigned
  number (like 0xffffffea) to the IRQ subsystem?

  The commit message notes that operating on a stale IRQ number is
  valid and safe. However, if pci_free_irq_vectors() frees the MSI-X
  vector concurrently, couldn't this cached irq number be recycled and
  reallocated to a completely different device before enable_irq(irq)
  executes? If the new device had explicitly disabled its IRQ, does
  this code inadvertently enable it, breaking the other device's
  synchronization?



Thread overview: 8+ messages
2026-03-07 19:46 [PATCH] Fix race bug in nvme_poll_irqdisable() Sungwoo Kim
2026-03-07 20:15 ` Sungwoo Kim
2026-03-10 13:57 ` Christoph Hellwig
2026-03-10 14:25 ` Keith Busch
2026-03-11 20:45   ` Sungwoo Kim
2026-05-13 18:27 ` Bjorn Helgaas [this message]
2026-05-14 14:22   ` Keith Busch
2026-05-14  7:56 ` Sagi Grimberg
