From: Pranjal Shrivastava <praan@google.com>
To: iommu@lists.linux.dev
Cc: Will Deacon <will@kernel.org>, Joerg Roedel <joro@8bytes.org>,
Robin Murphy <robin.murphy@arm.com>,
Jason Gunthorpe <jgg@ziepe.ca>,
Mostafa Saleh <smostafa@google.com>,
Nicolin Chen <nicolinc@nvidia.com>,
Daniel Mentz <danielmentz@google.com>,
Ashish Mhetre <amhetre@nvidia.com>,
Pranjal Shrivastava <praan@google.com>
Subject: [PATCH v6 03/10] iommu/tegra241-cmdqv: Add a helper to drain VCMDQs
Date: Tue, 14 Apr 2026 19:46:55 +0000 [thread overview]
Message-ID: <20260414194702.1229094-4-praan@google.com> (raw)
In-Reply-To: <20260414194702.1229094-1-praan@google.com>

The tegra241-cmdqv driver supports vCMDQs, which need to be drained
before suspending the SMMU. The current driver implementation only uses
VINTF0 for the kernel-owned vCMDQs, so those are the queues that need
draining. Add a helper that drains all the enabled vCMDQs under VINTF0.

Add another function pointer to arm_smmu_impl_ops to drain
implementation-specific queues, and call it from `arm_smmu_drain_queues`.
Reviewed-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Pranjal Shrivastava <praan@google.com>
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 7 +++++
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 1 +
.../iommu/arm/arm-smmu-v3/tegra241-cmdqv.c | 27 +++++++++++++++++++
3 files changed, 35 insertions(+)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 088a7d1c002b..e865a8aa2210 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1024,6 +1024,13 @@ static int arm_smmu_drain_queues(struct arm_smmu_device *smmu)
*/
ret = arm_smmu_queue_poll_until_empty(smmu, &smmu->cmdq.q);
+ if (ret)
+ goto out;
+
+ /* Drain all implementation-specific queues */
+ if (smmu->impl_ops && smmu->impl_ops->drain_queues)
+ ret = smmu->impl_ops->drain_queues(smmu);
+out:
return ret;
}
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 47799d58d10c..001734bc3c7d 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -818,6 +818,7 @@ struct arm_smmu_impl_ops {
size_t (*get_viommu_size)(enum iommu_viommu_type viommu_type);
int (*vsmmu_init)(struct arm_vsmmu *vsmmu,
const struct iommu_user_data *user_data);
+ int (*drain_queues)(struct arm_smmu_device *smmu);
};
/* An SMMUv3 instance */
diff --git a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
index 83f6e9f6c51d..b3ed2149f72b 100644
--- a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+++ b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
@@ -414,6 +414,32 @@ tegra241_cmdqv_get_cmdq(struct arm_smmu_device *smmu,
return &vcmdq->cmdq;
}
+static int tegra241_cmdqv_drain_vintf0_lvcmdqs(struct arm_smmu_device *smmu)
+{
+ struct tegra241_cmdqv *cmdqv =
+ container_of(smmu, struct tegra241_cmdqv, smmu);
+ struct tegra241_vintf *vintf = cmdqv->vintfs[0];
+ int ret = 0;
+ u16 lidx;
+
+ /* Kernel only uses VINTF0. Return if it's disabled */
+ if (!READ_ONCE(vintf->enabled))
+ return 0;
+
+ for (lidx = 0; lidx < cmdqv->num_lvcmdqs_per_vintf; lidx++) {
+ struct tegra241_vcmdq *vcmdq = vintf->lvcmdqs[lidx];
+
+ if (!vcmdq || !READ_ONCE(vcmdq->enabled))
+ continue;
+
+ ret = arm_smmu_queue_poll_until_empty(smmu, &vcmdq->cmdq.q);
+ if (ret)
+ break;
+ }
+
+ return ret;
+}
+
/* HW Reset Functions */
/*
@@ -845,6 +871,7 @@ static struct arm_smmu_impl_ops tegra241_cmdqv_impl_ops = {
.get_secondary_cmdq = tegra241_cmdqv_get_cmdq,
.device_reset = tegra241_cmdqv_hw_reset,
.device_remove = tegra241_cmdqv_remove,
+ .drain_queues = tegra241_cmdqv_drain_vintf0_lvcmdqs,
/* For user-space use */
.hw_info = tegra241_cmdqv_hw_info,
.get_viommu_size = tegra241_cmdqv_get_vintf_size,
--
2.54.0.rc0.605.g598a273b03-goog