public inbox for linux-kernel@vger.kernel.org
From: Samiullah Khawaja <skhawaja@google.com>
To: Guanghui Feng <guanghuifeng@linux.alibaba.com>
Cc: baolu.lu@linux.intel.com, dwmw2@infradead.org, joro@8bytes.org,
	 will@kernel.org, robin.murphy@arm.com, iommu@lists.linux.dev,
	 linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3] iommu/vt-d: fix intel iommu iotlb sync hardlockup and retry
Date: Fri, 6 Mar 2026 19:42:11 +0000	[thread overview]
Message-ID: <aast-ad3ZmSWh2Pl@google.com> (raw)
In-Reply-To: <20260306101516.3885775-1-guanghuifeng@linux.alibaba.com>

On Fri, Mar 06, 2026 at 06:15:16PM +0800, Guanghui Feng wrote:
>When qi_check_fault() handles an IOMMU Invalidation Time-out Error
>(ITE), it marks only the requests at odd-numbered positions in the
>queue as QI_ABORT, which is correct only for single-request
>submissions. However, qi_submit_sync() now supports submitting
>multiple requests at once, so the wait descriptor is no longer
>guaranteed to sit at an odd-numbered position. If such a request
>times out, it is never marked QI_ABORT, the IOMMU request cannot be
>re-initiated, and qi_submit_sync() polls for completion forever.
>
>Fix this by marking every request that the IOMMU has already fetched
>and that is still in QI_IN_USE state (including wait descriptors) as
>QI_ABORT, so that all timed-out requests can be resubmitted, not just
>those at odd-numbered positions.
>
>Signed-off-by: Guanghui Feng <guanghuifeng@linux.alibaba.com>
>Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>
>---
> drivers/iommu/intel/dmar.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
>diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
>index d68c06025cac..69222dbd2af0 100644
>--- a/drivers/iommu/intel/dmar.c
>+++ b/drivers/iommu/intel/dmar.c
>@@ -1314,7 +1314,6 @@ static int qi_check_fault(struct intel_iommu *iommu, int index, int wait_index)
> 	if (fault & DMA_FSTS_ITE) {
> 		head = readl(iommu->reg + DMAR_IQH_REG);
> 		head = ((head >> shift) - 1 + QI_LENGTH) % QI_LENGTH;
>-		head |= 1;
> 		tail = readl(iommu->reg + DMAR_IQT_REG);
> 		tail = ((tail >> shift) - 1 + QI_LENGTH) % QI_LENGTH;
>
>@@ -1331,7 +1330,7 @@ static int qi_check_fault(struct intel_iommu *iommu, int index, int wait_index)
> 		do {
> 			if (qi->desc_status[head] == QI_IN_USE)
> 				qi->desc_status[head] = QI_ABORT;
>-			head = (head - 2 + QI_LENGTH) % QI_LENGTH;
>+			head = (head - 1 + QI_LENGTH) % QI_LENGTH;
> 		} while (head != tail);
>
> 		/*
>-- 
>2.43.7
>
>

Reviewed-by: Samiullah Khawaja <skhawaja@google.com>


Thread overview: 7+ messages
2026-03-06 10:15 [PATCH v3] iommu/vt-d: fix intel iommu iotlb sync hardlockup and retry Guanghui Feng
2026-03-06 19:42 ` Samiullah Khawaja [this message]
2026-03-09  9:05 ` guanghuifeng
2026-03-10  6:02   ` Baolu Lu
2026-03-10  6:46     ` Baolu Lu
2026-03-10 12:59 ` Shuai Xue
2026-03-11  2:18   ` Baolu Lu
