* [RFC PATCH v2 1/9] iommu/arm-smmu-v3: group attached devices by smmu
2023-08-22 10:56 [RFC PATCH v2 0/9] Install domain onto multiple smmus Michael Shavit
@ 2023-08-22 10:56 ` Michael Shavit
2023-08-22 12:49 ` Jason Gunthorpe
2023-08-22 10:56 ` [RFC PATCH v2 2/9] iommu/arm-smmu-v3-sva: Move SVA optimization into arm_smmu_tlb_inv_range_asid Michael Shavit
` (8 subsequent siblings)
9 siblings, 1 reply; 15+ messages in thread
From: Michael Shavit @ 2023-08-22 10:56 UTC (permalink / raw)
To: iommu, linux-arm-kernel, linux-kernel
Cc: nicolinc, tina.zhang, jean-philippe, will, robin.murphy, jgg,
Michael Shavit, Dawei Li, Jason Gunthorpe, Joerg Roedel,
Kirill A. Shutemov, Lu Baolu, Mark Brown
Always insert a new master into the domain's devices list beside other
masters that belong to the same SMMU.
This allows code to batch commands by SMMU when iterating over the
masters that a domain is attached to.
Signed-off-by: Michael Shavit <mshavit@google.com>
---
Changes in v2:
- New commit
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 22 ++++++++++++++++++---
1 file changed, 19 insertions(+), 3 deletions(-)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index f17704c35858d..37b9223c145ba 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2382,6 +2382,24 @@ static void arm_smmu_detach_dev(struct arm_smmu_master *master)
arm_smmu_write_ctx_desc(master, 0, NULL);
}
+static void arm_smmu_domain_device_list_add(struct arm_smmu_domain *smmu_domain,
+ struct arm_smmu_master *master)
+{
+ struct arm_smmu_master *iter;
+ unsigned long flags;
+
+ spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+ if (list_empty(&smmu_domain->devices))
+ list_add(&master->domain_head, &smmu_domain->devices);
+ else {
+ list_for_each_entry(iter, &smmu_domain->devices, domain_head)
+ if (iter->smmu == master->smmu)
+ break;
+ list_add(&master->domain_head, &iter->domain_head);
+ }
+ spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+}
+
static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
{
int ret = 0;
@@ -2435,9 +2453,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
if (smmu_domain->stage != ARM_SMMU_DOMAIN_BYPASS)
master->ats_enabled = arm_smmu_ats_supported(master);
- spin_lock_irqsave(&smmu_domain->devices_lock, flags);
- list_add(&master->domain_head, &smmu_domain->devices);
- spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+ arm_smmu_domain_device_list_add(smmu_domain, master);
if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
if (!master->cd_table.cdtab) {
--
2.42.0.rc1.204.g551eb34607-goog
* Re: [RFC PATCH v2 1/9] iommu/arm-smmu-v3: group attached devices by smmu
2023-08-22 10:56 ` [RFC PATCH v2 1/9] iommu/arm-smmu-v3: group attached devices by smmu Michael Shavit
@ 2023-08-22 12:49 ` Jason Gunthorpe
0 siblings, 0 replies; 15+ messages in thread
From: Jason Gunthorpe @ 2023-08-22 12:49 UTC (permalink / raw)
To: Michael Shavit
Cc: iommu, linux-arm-kernel, linux-kernel, nicolinc, tina.zhang,
jean-philippe, will, robin.murphy, Dawei Li, Joerg Roedel,
Kirill A. Shutemov, Lu Baolu, Mark Brown
On Tue, Aug 22, 2023 at 06:56:57PM +0800, Michael Shavit wrote:
> Always insert a new master in the devices_list besides other masters
> that belong to the same smmu.
> This allows code to batch commands by SMMU when iterating over masters
> that a domain is attached to.
>
> Signed-off-by: Michael Shavit <mshavit@google.com>
> ---
>
> Changes in v2:
> - New commit
>
> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 22 ++++++++++++++++++---
> 1 file changed, 19 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index f17704c35858d..37b9223c145ba 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -2382,6 +2382,24 @@ static void arm_smmu_detach_dev(struct arm_smmu_master *master)
> arm_smmu_write_ctx_desc(master, 0, NULL);
> }
>
> +static void arm_smmu_domain_device_list_add(struct arm_smmu_domain *smmu_domain,
> + struct arm_smmu_master *master)
> +{
> + struct arm_smmu_master *iter;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> + if (list_empty(&smmu_domain->devices))
> + list_add(&master->domain_head, &smmu_domain->devices);
> + else {
> + list_for_each_entry(iter, &smmu_domain->devices, domain_head)
> + if (iter->smmu == master->smmu)
> + break;
> + list_add(&master->domain_head, &iter->domain_head);
> + }
IIRC you are not supposed to touch iter after the list_for_each. Like this:
	list_for_each_entry(iter, &smmu_domain->devices, domain_head) {
		if (iter->smmu == master->smmu) {
			list_add(&master->domain_head, &iter->domain_head);
			goto out;
		}
	}
	list_add(&master->domain_head, &smmu_domain->devices);
out:
	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
Jason
* [RFC PATCH v2 2/9] iommu/arm-smmu-v3-sva: Move SVA optimization into arm_smmu_tlb_inv_range_asid
2023-08-22 10:56 [RFC PATCH v2 0/9] Install domain onto multiple smmus Michael Shavit
2023-08-22 10:56 ` [RFC PATCH v2 1/9] iommu/arm-smmu-v3: group attached devices by smmu Michael Shavit
@ 2023-08-22 10:56 ` Michael Shavit
2023-08-22 10:56 ` [RFC PATCH v2 3/9] iommu/arm-smmu-v3: Issue invalidations commands to multiple SMMUs Michael Shavit
` (7 subsequent siblings)
9 siblings, 0 replies; 15+ messages in thread
From: Michael Shavit @ 2023-08-22 10:56 UTC (permalink / raw)
To: iommu, linux-arm-kernel, linux-kernel
Cc: nicolinc, tina.zhang, jean-philippe, will, robin.murphy, jgg,
Michael Shavit, Dawei Li, Jason Gunthorpe, Joerg Roedel,
Kirill A. Shutemov, Lu Baolu, Mark Brown
This will allow the optimization to be decided on a per-SMMU basis when
arm_smmu_tlb_inv_range_asid operates on multiple SMMUs.
Signed-off-by: Michael Shavit <mshavit@google.com>
---
Changes in v2:
- New commit
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 5 ++---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 4 ++++
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 1 +
3 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index 238ede8368d10..53f65a89a55f9 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -230,9 +230,8 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
*/
size = end - start;
- if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM))
- arm_smmu_tlb_inv_range_asid(start, size, smmu_mn->cd->asid,
- PAGE_SIZE, false, smmu_domain);
+ arm_smmu_tlb_inv_range_asid(start, size, smmu_mn->cd->asid,
+ PAGE_SIZE, false, true, smmu_domain);
arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, start, size);
}
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 37b9223c145ba..db4df9d6aef10 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1974,6 +1974,7 @@ static void arm_smmu_tlb_inv_range_domain(unsigned long iova, size_t size,
void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid,
size_t granule, bool leaf,
+ bool skip_btm_capable_devices,
struct arm_smmu_domain *smmu_domain)
{
struct arm_smmu_cmdq_ent cmd = {
@@ -1985,6 +1986,9 @@ void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid,
},
};
+ if (skip_btm_capable_devices &&
+ smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM)
+ return;
__arm_smmu_tlb_inv_range(&cmd, iova, size, granule, smmu_domain);
}
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 83d2790b701e7..05599914eb0a0 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -751,6 +751,7 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_master *smmu_master, int ssid,
void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid);
void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid,
size_t granule, bool leaf,
+ bool skip_btm_capable_devices,
struct arm_smmu_domain *smmu_domain);
bool arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd);
int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain, int ssid,
--
2.42.0.rc1.204.g551eb34607-goog
* [RFC PATCH v2 3/9] iommu/arm-smmu-v3: Issue invalidations commands to multiple SMMUs
2023-08-22 10:56 [RFC PATCH v2 0/9] Install domain onto multiple smmus Michael Shavit
2023-08-22 10:56 ` [RFC PATCH v2 1/9] iommu/arm-smmu-v3: group attached devices by smmu Michael Shavit
2023-08-22 10:56 ` [RFC PATCH v2 2/9] iommu/arm-smmu-v3-sva: Move SVA optimization into arm_smmu_tlb_inv_range_asid Michael Shavit
@ 2023-08-22 10:56 ` Michael Shavit
2023-08-22 13:14 ` Jason Gunthorpe
2023-08-22 10:57 ` [RFC PATCH v2 4/9] iommu/arm-smmu-v3-sva: Allocate new ASID from installed_smmus Michael Shavit
` (6 subsequent siblings)
9 siblings, 1 reply; 15+ messages in thread
From: Michael Shavit @ 2023-08-22 10:56 UTC (permalink / raw)
To: iommu, linux-arm-kernel, linux-kernel
Cc: nicolinc, tina.zhang, jean-philippe, will, robin.murphy, jgg,
Michael Shavit, Dawei Li, Jason Gunthorpe, Joerg Roedel,
Kirill A. Shutemov, Lu Baolu, Mark Brown
Assume that devices in the smmu_domain->devices list that belong to the
same SMMU are adjacent to each other in the list.
Batch TLB/ATC invalidation commands for an smmu_domain by the SMMU
devices that the domain is installed to.
Signed-off-by: Michael Shavit <mshavit@google.com>
---
Changes in v2:
- Moved the ARM_SMMU_FEAT_BTM changes into a new preparatory commit
.../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 6 +-
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 134 +++++++++++++-----
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 2 +-
3 files changed, 104 insertions(+), 38 deletions(-)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index 53f65a89a55f9..fe88a7880ad57 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -112,7 +112,7 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
arm_smmu_write_ctx_desc_devices(smmu_domain, 0, cd);
/* Invalidate TLB entries previously associated with that context */
- arm_smmu_tlb_inv_asid(smmu, asid);
+ arm_smmu_tlb_inv_asid(smmu_domain, asid);
xa_erase(&arm_smmu_asid_xa, asid);
return NULL;
@@ -252,7 +252,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
*/
arm_smmu_write_ctx_desc_devices(smmu_domain, mm->pasid, &quiet_cd);
- arm_smmu_tlb_inv_asid(smmu_domain->smmu, smmu_mn->cd->asid);
+ arm_smmu_tlb_inv_asid(smmu_domain, smmu_mn->cd->asid);
arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);
smmu_mn->cleared = true;
@@ -340,7 +340,7 @@ static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
* new TLB entry can have been formed.
*/
if (!smmu_mn->cleared) {
- arm_smmu_tlb_inv_asid(smmu_domain->smmu, cd->asid);
+ arm_smmu_tlb_inv_asid(smmu_domain, cd->asid);
arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);
}
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index db4df9d6aef10..1d072fd38a2d6 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -960,15 +960,28 @@ static int arm_smmu_page_response(struct device *dev,
}
/* Context descriptor manipulation functions */
-void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid)
+void arm_smmu_tlb_inv_asid(struct arm_smmu_domain *smmu_domain, u16 asid)
{
+ struct arm_smmu_device *smmu = NULL;
+ struct arm_smmu_master *master;
struct arm_smmu_cmdq_ent cmd = {
- .opcode = smmu->features & ARM_SMMU_FEAT_E2H ?
- CMDQ_OP_TLBI_EL2_ASID : CMDQ_OP_TLBI_NH_ASID,
.tlbi.asid = asid,
};
+ unsigned long flags;
- arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
+ spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+ list_for_each_entry(master, &smmu_domain->devices,
+ domain_head) {
+ if (!smmu)
+ smmu = master->smmu;
+ if (smmu != master->smmu ||
+ list_is_last(&master->domain_head, &smmu_domain->devices)) {
+ cmd.opcode = smmu->features & ARM_SMMU_FEAT_E2H ?
+ CMDQ_OP_TLBI_EL2_ASID : CMDQ_OP_TLBI_NH_ASID,
+ arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
+ }
+ }
+ spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
}
static void arm_smmu_sync_cd(struct arm_smmu_master *master,
@@ -1811,14 +1824,13 @@ int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain, int ssid,
unsigned long iova, size_t size)
{
int i;
+ int ret = 0;
unsigned long flags;
struct arm_smmu_cmdq_ent cmd;
+ struct arm_smmu_device *smmu = NULL;
struct arm_smmu_master *master;
struct arm_smmu_cmdq_batch cmds;
- if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_ATS))
- return 0;
-
/*
* Ensure that we've completed prior invalidation of the main TLBs
* before we read 'nr_ats_masters' in case of a concurrent call to
@@ -1839,28 +1851,56 @@ int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain, int ssid,
arm_smmu_atc_inv_to_cmd(ssid, iova, size, &cmd);
cmds.num = 0;
-
spin_lock_irqsave(&smmu_domain->devices_lock, flags);
list_for_each_entry(master, &smmu_domain->devices, domain_head) {
if (!master->ats_enabled)
continue;
+ if (!smmu)
+ smmu = master->smmu;
+ if (smmu != master->smmu ||
+ list_is_last(&master->domain_head, &smmu_domain->devices)) {
+ ret = arm_smmu_cmdq_batch_submit(smmu, &cmds);
+ if (ret)
+ break;
+ cmds.num = 0;
+ }
for (i = 0; i < master->num_streams; i++) {
cmd.atc.sid = master->streams[i].id;
- arm_smmu_cmdq_batch_add(smmu_domain->smmu, &cmds, &cmd);
+ arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
}
}
spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
- return arm_smmu_cmdq_batch_submit(smmu_domain->smmu, &cmds);
+ return ret;
+}
+
+static void arm_smmu_tlb_inv_vmid(struct arm_smmu_domain *smmu_domain)
+{
+ struct arm_smmu_device *smmu = NULL;
+ struct arm_smmu_master *master;
+ struct arm_smmu_cmdq_ent cmd = {
+ .opcode = CMDQ_OP_TLBI_S12_VMALL,
+ .tlbi.vmid = smmu_domain->s2_cfg.vmid,
+ };
+ unsigned long flags;
+
+ spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+ list_for_each_entry(master, &smmu_domain->devices,
+ domain_head) {
+ if (!smmu)
+ smmu = master->smmu;
+ if (smmu != master->smmu ||
+ list_is_last(&master->domain_head, &smmu_domain->devices))
+ arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
+ }
+ spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
}
/* IO_PGTABLE API */
static void arm_smmu_tlb_inv_context(void *cookie)
{
struct arm_smmu_domain *smmu_domain = cookie;
- struct arm_smmu_device *smmu = smmu_domain->smmu;
- struct arm_smmu_cmdq_ent cmd;
/*
* NOTE: when io-pgtable is in non-strict mode, we may get here with
@@ -1870,11 +1910,9 @@ static void arm_smmu_tlb_inv_context(void *cookie)
* careful, 007.
*/
if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
- arm_smmu_tlb_inv_asid(smmu, smmu_domain->cd.asid);
+ arm_smmu_tlb_inv_asid(smmu_domain, smmu_domain->cd.asid);
} else {
- cmd.opcode = CMDQ_OP_TLBI_S12_VMALL;
- cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
- arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
+ arm_smmu_tlb_inv_vmid(smmu_domain);
}
arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
}
@@ -1882,9 +1920,9 @@ static void arm_smmu_tlb_inv_context(void *cookie)
static void __arm_smmu_tlb_inv_range(struct arm_smmu_cmdq_ent *cmd,
unsigned long iova, size_t size,
size_t granule,
- struct arm_smmu_domain *smmu_domain)
+ struct arm_smmu_domain *smmu_domain,
+ struct arm_smmu_device *smmu)
{
- struct arm_smmu_device *smmu = smmu_domain->smmu;
unsigned long end = iova + size, num_pages = 0, tg = 0;
size_t inv_range = granule;
struct arm_smmu_cmdq_batch cmds;
@@ -1949,21 +1987,36 @@ static void arm_smmu_tlb_inv_range_domain(unsigned long iova, size_t size,
size_t granule, bool leaf,
struct arm_smmu_domain *smmu_domain)
{
+ struct arm_smmu_device *smmu = NULL;
+ struct arm_smmu_master *master;
struct arm_smmu_cmdq_ent cmd = {
.tlbi = {
.leaf = leaf,
},
};
+ unsigned long flags;
- if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
- cmd.opcode = smmu_domain->smmu->features & ARM_SMMU_FEAT_E2H ?
- CMDQ_OP_TLBI_EL2_VA : CMDQ_OP_TLBI_NH_VA;
- cmd.tlbi.asid = smmu_domain->cd.asid;
- } else {
- cmd.opcode = CMDQ_OP_TLBI_S2_IPA;
- cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
+ spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+ list_for_each_entry(master, &smmu_domain->devices, domain_head) {
+ if (!smmu)
+ smmu = master->smmu;
+ if (smmu != master->smmu ||
+ list_is_last(&master->domain_head, &smmu_domain->devices)) {
+ if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+ cmd.opcode = smmu->features &
+ ARM_SMMU_FEAT_E2H ?
+ CMDQ_OP_TLBI_EL2_VA :
+ CMDQ_OP_TLBI_NH_VA;
+ cmd.tlbi.asid = smmu_domain->cd.asid;
+ } else {
+ cmd.opcode = CMDQ_OP_TLBI_S2_IPA;
+ cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
+ }
+ __arm_smmu_tlb_inv_range(&cmd, iova, size, granule,
+ smmu_domain, smmu);
+ }
}
- __arm_smmu_tlb_inv_range(&cmd, iova, size, granule, smmu_domain);
+ spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
/*
* Unfortunately, this can't be leaf-only since we may have
@@ -1977,19 +2030,33 @@ void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid,
bool skip_btm_capable_devices,
struct arm_smmu_domain *smmu_domain)
{
+ struct arm_smmu_device *smmu = NULL;
+ struct arm_smmu_master *master;
struct arm_smmu_cmdq_ent cmd = {
- .opcode = smmu_domain->smmu->features & ARM_SMMU_FEAT_E2H ?
- CMDQ_OP_TLBI_EL2_VA : CMDQ_OP_TLBI_NH_VA,
.tlbi = {
.asid = asid,
.leaf = leaf,
},
};
+ unsigned long flags;
- if (skip_btm_capable_devices &&
- smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM)
- return;
- __arm_smmu_tlb_inv_range(&cmd, iova, size, granule, smmu_domain);
+ spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+ list_for_each_entry(master, &smmu_domain->devices, domain_head) {
+ if (!smmu)
+ smmu = master->smmu;
+ if (smmu != master->smmu ||
+ list_is_last(&master->domain_head, &smmu_domain->devices)) {
+ if (skip_btm_capable_devices &&
+ smmu->features & ARM_SMMU_FEAT_BTM)
+ continue;
+ cmd.opcode = smmu->features & ARM_SMMU_FEAT_E2H ?
+ CMDQ_OP_TLBI_EL2_VA :
+ CMDQ_OP_TLBI_NH_VA;
+ __arm_smmu_tlb_inv_range(&cmd, iova, size, granule,
+ smmu_domain, smmu);
+ }
+ }
+ spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
}
static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
@@ -2523,8 +2590,7 @@ static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
{
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
- if (smmu_domain->smmu)
- arm_smmu_tlb_inv_context(smmu_domain);
+ arm_smmu_tlb_inv_context(smmu_domain);
}
static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 05599914eb0a0..b0cf9c33e6bcd 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -748,7 +748,7 @@ extern struct arm_smmu_ctx_desc quiet_cd;
int arm_smmu_write_ctx_desc(struct arm_smmu_master *smmu_master, int ssid,
struct arm_smmu_ctx_desc *cd);
-void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid);
+void arm_smmu_tlb_inv_asid(struct arm_smmu_domain *smmu_domain, u16 asid);
void arm_smmu_tlb_inv_range_asid(unsigned long iova, size_t size, int asid,
size_t granule, bool leaf,
bool skip_btm_capable_devices,
--
2.42.0.rc1.204.g551eb34607-goog
* Re: [RFC PATCH v2 3/9] iommu/arm-smmu-v3: Issue invalidations commands to multiple SMMUs
2023-08-22 10:56 ` [RFC PATCH v2 3/9] iommu/arm-smmu-v3: Issue invalidations commands to multiple SMMUs Michael Shavit
@ 2023-08-22 13:14 ` Jason Gunthorpe
0 siblings, 0 replies; 15+ messages in thread
From: Jason Gunthorpe @ 2023-08-22 13:14 UTC (permalink / raw)
To: Michael Shavit
Cc: iommu, linux-arm-kernel, linux-kernel, nicolinc, tina.zhang,
jean-philippe, will, robin.murphy, Dawei Li, Joerg Roedel,
Kirill A. Shutemov, Lu Baolu, Mark Brown
On Tue, Aug 22, 2023 at 06:56:59PM +0800, Michael Shavit wrote:
> Assume that devices in the smmu_domain->domain list that belong to the
> same SMMU are adjacent to each other in the list.
> Batch TLB/ATC invalidation commands for an smmu_domain by the SMMU
> devices that the domain is installed to.
>
> Signed-off-by: Michael Shavit <mshavit@google.com>
> ---
>
> Changes in v2:
> - Moved the ARM_SMMU_FEAT_BTM changes into a new prepatory commit
>
> .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 6 +-
> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 134 +++++++++++++-----
> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 2 +-
> 3 files changed, 104 insertions(+), 38 deletions(-)
>
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
> index 53f65a89a55f9..fe88a7880ad57 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
> @@ -112,7 +112,7 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
> arm_smmu_write_ctx_desc_devices(smmu_domain, 0, cd);
>
> /* Invalidate TLB entries previously associated with that context */
> - arm_smmu_tlb_inv_asid(smmu, asid);
> + arm_smmu_tlb_inv_asid(smmu_domain, asid);
>
> xa_erase(&arm_smmu_asid_xa, asid);
> return NULL;
> @@ -252,7 +252,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
> */
> arm_smmu_write_ctx_desc_devices(smmu_domain, mm->pasid, &quiet_cd);
>
> - arm_smmu_tlb_inv_asid(smmu_domain->smmu, smmu_mn->cd->asid);
> + arm_smmu_tlb_inv_asid(smmu_domain, smmu_mn->cd->asid);
> arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);
>
> smmu_mn->cleared = true;
> @@ -340,7 +340,7 @@ static void arm_smmu_mmu_notifier_put(struct arm_smmu_mmu_notifier *smmu_mn)
> * new TLB entry can have been formed.
> */
> if (!smmu_mn->cleared) {
> - arm_smmu_tlb_inv_asid(smmu_domain->smmu, cd->asid);
> + arm_smmu_tlb_inv_asid(smmu_domain, cd->asid);
> arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);
> }
>
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index db4df9d6aef10..1d072fd38a2d6 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -960,15 +960,28 @@ static int arm_smmu_page_response(struct device *dev,
> }
>
> /* Context descriptor manipulation functions */
> -void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid)
> +void arm_smmu_tlb_inv_asid(struct arm_smmu_domain *smmu_domain, u16 asid)
> {
> + struct arm_smmu_device *smmu = NULL;
> + struct arm_smmu_master *master;
> struct arm_smmu_cmdq_ent cmd = {
> - .opcode = smmu->features & ARM_SMMU_FEAT_E2H ?
> - CMDQ_OP_TLBI_EL2_ASID : CMDQ_OP_TLBI_NH_ASID,
> .tlbi.asid = asid,
> };
> + unsigned long flags;
>
> - arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
> + spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> + list_for_each_entry(master, &smmu_domain->devices,
> + domain_head) {
> + if (!smmu)
> + smmu = master->smmu;
> + if (smmu != master->smmu ||
> + list_is_last(&master->domain_head, &smmu_domain->devices)) {
Finding the end of the list seems too complicated, just:
	struct arm_smmu_device *invalidated_smmu = NULL;

	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
		if (master->smmu == invalidated_smmu)
			continue;
		cmd.opcode = master->smmu->features & ARM_SMMU_FEAT_E2H ?
			     CMDQ_OP_TLBI_EL2_ASID : CMDQ_OP_TLBI_NH_ASID;
		arm_smmu_cmdq_issue_cmd_with_sync(master->smmu, &cmd);
		invalidated_smmu = master->smmu;
	}
> @@ -1839,28 +1851,56 @@ int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain, int ssid,
> arm_smmu_atc_inv_to_cmd(ssid, iova, size, &cmd);
>
> cmds.num = 0;
> -
> spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> list_for_each_entry(master, &smmu_domain->devices, domain_head) {
> if (!master->ats_enabled)
> continue;
> + if (!smmu)
> + smmu = master->smmu;
> + if (smmu != master->smmu ||
> + list_is_last(&master->domain_head, &smmu_domain->devices)) {
> + ret = arm_smmu_cmdq_batch_submit(smmu, &cmds);
> + if (ret)
> + break;
> + cmds.num = 0;
> + }
>
> for (i = 0; i < master->num_streams; i++) {
> cmd.atc.sid = master->streams[i].id;
> - arm_smmu_cmdq_batch_add(smmu_domain->smmu, &cmds, &cmd);
> + arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
> }
> }
Doesn't the IOTLB invalidate have to come before the ATC invalidate?
So again, use the pattern as above?
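Ie. roughly like this (untested; assumes ret is initialised to 0 as in the
patch, and cur_smmu is just a new local for the illustration):

	struct arm_smmu_device *cur_smmu = NULL;

	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
		if (!master->ats_enabled)
			continue;
		if (cur_smmu && master->smmu != cur_smmu) {
			ret = arm_smmu_cmdq_batch_submit(cur_smmu, &cmds);
			if (ret)
				break;
			cmds.num = 0;
		}
		cur_smmu = master->smmu;
		for (i = 0; i < master->num_streams; i++) {
			cmd.atc.sid = master->streams[i].id;
			arm_smmu_cmdq_batch_add(cur_smmu, &cmds, &cmd);
		}
	}
	if (!ret && cur_smmu)
		ret = arm_smmu_cmdq_batch_submit(cur_smmu, &cmds);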
> spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>
> - return arm_smmu_cmdq_batch_submit(smmu_domain->smmu, &cmds);
> + return ret;
> +}
> +
> +static void arm_smmu_tlb_inv_vmid(struct arm_smmu_domain *smmu_domain)
> +{
> + struct arm_smmu_device *smmu = NULL;
> + struct arm_smmu_master *master;
> + struct arm_smmu_cmdq_ent cmd = {
> + .opcode = CMDQ_OP_TLBI_S12_VMALL,
> + .tlbi.vmid = smmu_domain->s2_cfg.vmid,
> + };
> + unsigned long flags;
> +
> + spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> + list_for_each_entry(master, &smmu_domain->devices,
> + domain_head) {
> + if (!smmu)
> + smmu = master->smmu;
> + if (smmu != master->smmu ||
> + list_is_last(&master->domain_head, &smmu_domain->devices))
> + arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
> + }
> + spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> }
I count three of these, so a macro helper is probably a good
idea. Something approx like:
static struct arm_smmu_master *smmu_next_entry(struct arm_smmu_master *pos,
					       struct arm_smmu_domain *domain)
{
	struct arm_smmu_device *smmu = pos->smmu;

	do {
		pos = list_next_entry(pos, domain_head);
	} while (!list_entry_is_head(pos, &domain->devices, domain_head) &&
		 pos->smmu == smmu);
	return pos;
}

#define for_each_smmu(pos, domain, smmu)                                   \
	for (pos = list_first_entry(&(domain)->devices,                    \
				    struct arm_smmu_master, domain_head),  \
	     smmu = (pos)->smmu;                                           \
	     !list_entry_is_head(pos, &(domain)->devices, domain_head);    \
	     pos = smmu_next_entry(pos, domain), smmu = (pos)->smmu)
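With that, the arm_smmu_tlb_inv_asid() loop would reduce to something like
(untested):

	struct arm_smmu_master *master;
	struct arm_smmu_device *smmu;

	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
	for_each_smmu(master, smmu_domain, smmu) {
		cmd.opcode = smmu->features & ARM_SMMU_FEAT_E2H ?
			     CMDQ_OP_TLBI_EL2_ASID : CMDQ_OP_TLBI_NH_ASID;
		arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
	}
	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);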
> @@ -1949,21 +1987,36 @@ static void arm_smmu_tlb_inv_range_domain(unsigned long iova, size_t size,
> size_t granule, bool leaf,
> struct arm_smmu_domain *smmu_domain)
> {
> + struct arm_smmu_device *smmu = NULL;
> + struct arm_smmu_master *master;
> struct arm_smmu_cmdq_ent cmd = {
> .tlbi = {
> .leaf = leaf,
> },
> };
> + unsigned long flags;
>
> - if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
> - cmd.opcode = smmu_domain->smmu->features & ARM_SMMU_FEAT_E2H ?
> - CMDQ_OP_TLBI_EL2_VA : CMDQ_OP_TLBI_NH_VA;
> - cmd.tlbi.asid = smmu_domain->cd.asid;
> - } else {
> - cmd.opcode = CMDQ_OP_TLBI_S2_IPA;
> - cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
> + spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> + list_for_each_entry(master, &smmu_domain->devices, domain_head) {
> + if (!smmu)
> + smmu = master->smmu;
> + if (smmu != master->smmu ||
> + list_is_last(&master->domain_head, &smmu_domain->devices)) {
> + if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
> + cmd.opcode = smmu->features &
> + ARM_SMMU_FEAT_E2H ?
> + CMDQ_OP_TLBI_EL2_VA :
> + CMDQ_OP_TLBI_NH_VA;
> + cmd.tlbi.asid = smmu_domain->cd.asid;
> + } else {
> + cmd.opcode = CMDQ_OP_TLBI_S2_IPA;
> + cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
> + }
These calculations based on smmu domain shouldn't be in the loop, the
smmu_domain doesn't change.
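Ie. set the stage-dependent fields once before the loop and only pick the
E2H opcode variant per SMMU inside it, something like (untested):

	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
		cmd.tlbi.asid = smmu_domain->cd.asid;
	} else {
		cmd.opcode = CMDQ_OP_TLBI_S2_IPA;
		cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
	}

	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
		/* ... skip masters whose SMMU has already been handled ... */
		if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1)
			cmd.opcode = master->smmu->features & ARM_SMMU_FEAT_E2H ?
				     CMDQ_OP_TLBI_EL2_VA : CMDQ_OP_TLBI_NH_VA;
		__arm_smmu_tlb_inv_range(&cmd, iova, size, granule,
					 smmu_domain, master->smmu);
	}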
> - __arm_smmu_tlb_inv_range(&cmd, iova, size, granule, smmu_domain);
> + spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> + list_for_each_entry(master, &smmu_domain->devices, domain_head) {
> + if (!smmu)
> + smmu = master->smmu;
> + if (smmu != master->smmu ||
> + list_is_last(&master->domain_head, &smmu_domain->devices)) {
> + if (skip_btm_capable_devices &&
> + smmu->features & ARM_SMMU_FEAT_BTM)
> + continue;
> + cmd.opcode = smmu->features & ARM_SMMU_FEAT_E2H ?
> + CMDQ_OP_TLBI_EL2_VA :
> + CMDQ_OP_TLBI_NH_VA;
There are 3 places doing this if, maybe it should be in a wrapper of
__arm_smmu_tlb_inv_range ?
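Eg. a small wrapper that picks the opcode in one place (untested, the name
is just an example):

static void arm_smmu_tlb_inv_range_va(struct arm_smmu_cmdq_ent *cmd,
				      unsigned long iova, size_t size,
				      size_t granule,
				      struct arm_smmu_domain *smmu_domain,
				      struct arm_smmu_device *smmu)
{
	/* The EL2 vs NH variant depends on the SMMU, not on the domain */
	cmd->opcode = smmu->features & ARM_SMMU_FEAT_E2H ?
		      CMDQ_OP_TLBI_EL2_VA : CMDQ_OP_TLBI_NH_VA;
	__arm_smmu_tlb_inv_range(cmd, iova, size, granule, smmu_domain, smmu);
}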
Jason
* [RFC PATCH v2 4/9] iommu/arm-smmu-v3-sva: Allocate new ASID from installed_smmus
2023-08-22 10:56 [RFC PATCH v2 0/9] Install domain onto multiple smmus Michael Shavit
` (2 preceding siblings ...)
2023-08-22 10:56 ` [RFC PATCH v2 3/9] iommu/arm-smmu-v3: Issue invalidations commands to multiple SMMUs Michael Shavit
@ 2023-08-22 10:57 ` Michael Shavit
2023-08-22 13:19 ` Jason Gunthorpe
2023-08-22 10:57 ` [RFC PATCH v2 5/9] iommu/arm-smmu-v3: Alloc vmid from global pool Michael Shavit
` (5 subsequent siblings)
9 siblings, 1 reply; 15+ messages in thread
From: Michael Shavit @ 2023-08-22 10:57 UTC (permalink / raw)
To: iommu, linux-arm-kernel, linux-kernel
Cc: nicolinc, tina.zhang, jean-philippe, will, robin.murphy, jgg,
Michael Shavit, Dawei Li, Jason Gunthorpe, Joerg Roedel,
Kirill A. Shutemov, Lu Baolu, Mark Brown
Pick an ASID that is within the supported range of all SMMUs that the
domain is installed to.
Signed-off-by: Michael Shavit <mshavit@google.com>
---
(no changes since v1)
.../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 23 +++++++++++++++----
1 file changed, 19 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index fe88a7880ad57..92d2f8c4e90a8 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -66,6 +66,20 @@ static int arm_smmu_write_ctx_desc_devices(struct arm_smmu_domain *smmu_domain,
return ret;
}
+static u32 arm_smmu_domain_max_asid_bits(struct arm_smmu_domain *smmu_domain)
+{
+ struct arm_smmu_master *master;
+ unsigned long flags;
+ u32 asid_bits = 16;
+
+ spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+ list_for_each_entry(master, &smmu_domain->devices,
+ domain_head)
+ asid_bits = min(asid_bits, master->smmu->asid_bits);
+ spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+ return asid_bits;
+}
+
/*
* Check if the CPU ASID is available on the SMMU side. If a private context
* descriptor is using it, try to replace it.
@@ -76,7 +90,6 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
int ret;
u32 new_asid;
struct arm_smmu_ctx_desc *cd;
- struct arm_smmu_device *smmu;
struct arm_smmu_domain *smmu_domain;
cd = xa_load(&arm_smmu_asid_xa, asid);
@@ -92,10 +105,12 @@ arm_smmu_share_asid(struct mm_struct *mm, u16 asid)
}
smmu_domain = container_of(cd, struct arm_smmu_domain, cd);
- smmu = smmu_domain->smmu;
- ret = xa_alloc(&arm_smmu_asid_xa, &new_asid, cd,
- XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
+ ret = xa_alloc(
+ &arm_smmu_asid_xa, &new_asid, cd,
+ XA_LIMIT(1,
+ (1 << arm_smmu_domain_max_asid_bits(smmu_domain)) - 1),
+ GFP_KERNEL);
if (ret)
return ERR_PTR(-ENOSPC);
/*
--
2.42.0.rc1.204.g551eb34607-goog
* Re: [RFC PATCH v2 4/9] iommu/arm-smmu-v3-sva: Allocate new ASID from installed_smmus
2023-08-22 10:57 ` [RFC PATCH v2 4/9] iommu/arm-smmu-v3-sva: Allocate new ASID from installed_smmus Michael Shavit
@ 2023-08-22 13:19 ` Jason Gunthorpe
2023-08-23 7:26 ` Michael Shavit
0 siblings, 1 reply; 15+ messages in thread
From: Jason Gunthorpe @ 2023-08-22 13:19 UTC (permalink / raw)
To: Michael Shavit
Cc: iommu, linux-arm-kernel, linux-kernel, nicolinc, tina.zhang,
jean-philippe, will, robin.murphy, Dawei Li, Joerg Roedel,
Kirill A. Shutemov, Lu Baolu, Mark Brown
On Tue, Aug 22, 2023 at 06:57:00PM +0800, Michael Shavit wrote:
> Pick an ASID that is within the supported range of all SMMUs that the
> domain is installed to.
>
> Signed-off-by: Michael Shavit <mshavit@google.com>
> ---
>
> (no changes since v1)
>
> .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 23 +++++++++++++++----
> 1 file changed, 19 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
> index fe88a7880ad57..92d2f8c4e90a8 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
> @@ -66,6 +66,20 @@ static int arm_smmu_write_ctx_desc_devices(struct arm_smmu_domain *smmu_domain,
> return ret;
> }
>
> +static u32 arm_smmu_domain_max_asid_bits(struct arm_smmu_domain *smmu_domain)
> +{
> + struct arm_smmu_master *master;
> + unsigned long flags;
> + u32 asid_bits = 16;
> +
> + spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> + list_for_each_entry(master, &smmu_domain->devices,
> + domain_head)
> + asid_bits = min(asid_bits, master->smmu->asid_bits);
> + spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> + return asid_bits;
> +}
I still don't like this, it is not locked properly. You release the
devices_lock which means the max_asid could change before we get to
arm_smmu_write_ctx_desc()
If you want to take this shortcut temporarily then a global max_asid
is probably a better plan. Change it to a per-master allocation later
to remove that.
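Eg. something like this in arm_smmu_share_asid() (untested; 8 bits here is
only an illustration of a lowest common denominator, until ASIDs become a
per-SMMU/per-master allocation):

	/*
	 * Allocate from the smallest ASID range any SMMU can support so the
	 * ASID stays valid for whichever SMMUs the domain is attached to.
	 */
	ret = xa_alloc(&arm_smmu_asid_xa, &new_asid, cd,
		       XA_LIMIT(1, (1 << 8) - 1), GFP_KERNEL);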
Jason
* Re: [RFC PATCH v2 4/9] iommu/arm-smmu-v3-sva: Allocate new ASID from installed_smmus
2023-08-22 13:19 ` Jason Gunthorpe
@ 2023-08-23 7:26 ` Michael Shavit
0 siblings, 0 replies; 15+ messages in thread
From: Michael Shavit @ 2023-08-23 7:26 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, linux-arm-kernel, linux-kernel, nicolinc, tina.zhang,
jean-philippe, will, robin.murphy, Dawei Li, Joerg Roedel,
Kirill A. Shutemov, Lu Baolu, Mark Brown
On Tue, Aug 22, 2023 at 9:19 PM Jason Gunthorpe <jgg@nvidia.com> wrote:
>
> On Tue, Aug 22, 2023 at 06:57:00PM +0800, Michael Shavit wrote:
> > Pick an ASID that is within the supported range of all SMMUs that the
> > domain is installed to.
> >
> > Signed-off-by: Michael Shavit <mshavit@google.com>
> > ---
> >
> > (no changes since v1)
> >
> > .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c | 23 +++++++++++++++----
> > 1 file changed, 19 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
> > index fe88a7880ad57..92d2f8c4e90a8 100644
> > --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
> > +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
> > @@ -66,6 +66,20 @@ static int arm_smmu_write_ctx_desc_devices(struct arm_smmu_domain *smmu_domain,
> > return ret;
> > }
> >
> > +static u32 arm_smmu_domain_max_asid_bits(struct arm_smmu_domain *smmu_domain)
> > +{
> > + struct arm_smmu_master *master;
> > + unsigned long flags;
> > + u32 asid_bits = 16;
> > +
> > + spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> > + list_for_each_entry(master, &smmu_domain->devices,
> > + domain_head)
> > + asid_bits = min(asid_bits, master->smmu->asid_bits);
> > + spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
> > + return asid_bits;
> > +}
>
> I still don't like this, it is not locked properly. You release the
> devices_lock which means the max_asid could change before we get to
> arm_smmu_write_ctx_desc()
Good point.
> If you want to take this shortcut temporarily then a global max_asid
> is probably a better plan. Change it to a per-master allocation later
> to remove that.
Two options there:
1. When allocating a new ASID in arm_smmu_share_asid, limit ourselves
to 8-bit-wide ASIDs regardless of whether all the installed SMMUs
support 16-bit ASIDs.
2. In addition, also use a maximum 8-bit-wide ASID when allocating
ASIDs in arm_smmu_domain_finalise_s1.
The first one has minimal impact since arm_smmu_share_asid is
supposedly rare, and is a simple replacement for this patch.
The second one is more intrusive since we'd be limiting the number of
dma/unmanaged domains to a fairly small number, but it has the
advantage of allowing those domains to always successfully attach to
masters belonging to SMMUs with different asid_bits values (without
having to re-allocate a new ASID for the domain, arm_smmu_share_asid
style). Whereas this series simply fails to attach the domain in such
scenarios.
* [RFC PATCH v2 5/9] iommu/arm-smmu-v3: Alloc vmid from global pool
2023-08-22 10:56 [RFC PATCH v2 0/9] Install domain onto multiple smmus Michael Shavit
` (3 preceding siblings ...)
2023-08-22 10:57 ` [RFC PATCH v2 4/9] iommu/arm-smmu-v3-sva: Allocate new ASID from installed_smmus Michael Shavit
@ 2023-08-22 10:57 ` Michael Shavit
2023-08-22 10:57 ` [RFC PATCH v2 6/9] iommu/arm-smmu-v3: check smmu compatibility on attach Michael Shavit
` (4 subsequent siblings)
9 siblings, 0 replies; 15+ messages in thread
From: Michael Shavit @ 2023-08-22 10:57 UTC (permalink / raw)
To: iommu, linux-arm-kernel, linux-kernel
Cc: nicolinc, tina.zhang, jean-philippe, will, robin.murphy, jgg,
Michael Shavit, Dawei Li, Jason Gunthorpe, Joerg Roedel,
Kirill A. Shutemov, Lu Baolu, Mark Brown
Consistent with how ASIDs are allocated, allocate VMIDs from a global
pool instead of a per-SMMU pool. This allows the domain to be attached
to multiple SMMUs.
Signed-off-by: Michael Shavit <mshavit@google.com>
---
As discussed in the v1 RFC, an alternative would be to support assigning a
different VMID/ASID to a domain for each SMMU that it is installed to.
This is more flexible but would require more work to achieve.
(no changes since v1)
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 10 +++-------
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 1 -
2 files changed, 3 insertions(+), 8 deletions(-)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 1d072fd38a2d6..9adc2cedd487b 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -73,6 +73,7 @@ struct arm_smmu_option_prop {
DEFINE_XARRAY_ALLOC1(arm_smmu_asid_xa);
DEFINE_MUTEX(arm_smmu_asid_lock);
+DEFINE_IDA(arm_smmu_vmid_ida);
/*
* Special value used by SVA when a process dies, to quiesce a CD without
@@ -2130,7 +2131,6 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
static void arm_smmu_domain_free(struct iommu_domain *domain)
{
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
- struct arm_smmu_device *smmu = smmu_domain->smmu;
free_io_pgtable_ops(smmu_domain->pgtbl_ops);
@@ -2143,7 +2143,7 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
} else {
struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
if (cfg->vmid)
- ida_free(&smmu->vmid_map, cfg->vmid);
+ ida_free(&arm_smmu_vmid_ida, cfg->vmid);
}
kfree(smmu_domain);
@@ -2195,7 +2195,7 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
typeof(&pgtbl_cfg->arm_lpae_s2_cfg.vtcr) vtcr;
/* Reserve VMID 0 for stage-2 bypass STEs */
- vmid = ida_alloc_range(&smmu->vmid_map, 1, (1 << smmu->vmid_bits) - 1,
+ vmid = ida_alloc_range(&arm_smmu_vmid_ida, 1, (1 << smmu->vmid_bits) - 1,
GFP_KERNEL);
if (vmid < 0)
return vmid;
@@ -3169,9 +3169,6 @@ static int arm_smmu_init_strtab(struct arm_smmu_device *smmu)
reg = smmu->strtab_cfg.strtab_dma & STRTAB_BASE_ADDR_MASK;
reg |= STRTAB_BASE_RA;
smmu->strtab_cfg.strtab_base = reg;
-
- ida_init(&smmu->vmid_map);
-
return 0;
}
@@ -3995,7 +3992,6 @@ static void arm_smmu_device_remove(struct platform_device *pdev)
iommu_device_sysfs_remove(&smmu->iommu);
arm_smmu_device_disable(smmu);
iopf_queue_free(smmu->evtq.iopf);
- ida_destroy(&smmu->vmid_map);
}
static void arm_smmu_device_shutdown(struct platform_device *pdev)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index b0cf9c33e6bcd..1661d3252bac5 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -670,7 +670,6 @@ struct arm_smmu_device {
#define ARM_SMMU_MAX_VMIDS (1 << 16)
unsigned int vmid_bits;
- struct ida vmid_map;
unsigned int ssid_bits;
unsigned int sid_bits;
--
2.42.0.rc1.204.g551eb34607-goog
* [RFC PATCH v2 6/9] iommu/arm-smmu-v3: check smmu compatibility on attach
2023-08-22 10:56 [RFC PATCH v2 0/9] Install domain onto multiple smmus Michael Shavit
` (4 preceding siblings ...)
2023-08-22 10:57 ` [RFC PATCH v2 5/9] iommu/arm-smmu-v3: Alloc vmid from global pool Michael Shavit
@ 2023-08-22 10:57 ` Michael Shavit
2023-08-22 10:57 ` [RFC PATCH v2 7/9] iommu/arm-smmu-v3: Add arm_smmu_device as a parameter to domain_finalise Michael Shavit
` (3 subsequent siblings)
9 siblings, 0 replies; 15+ messages in thread
From: Michael Shavit @ 2023-08-22 10:57 UTC (permalink / raw)
To: iommu, linux-arm-kernel, linux-kernel
Cc: nicolinc, tina.zhang, jean-philippe, will, robin.murphy, jgg,
Michael Shavit, Dawei Li, Jason Gunthorpe, Joerg Roedel,
Kirill A. Shutemov, Lu Baolu, Mark Brown
Verify a domain's compatibility with the SMMU when it is being attached
to a master belonging to a different SMMU device.
Signed-off-by: Michael Shavit <mshavit@google.com>
---
Changes in v2:
- Access the pgtbl_cfg from the pgtable_ops instead of storing a copy in
the arm_smmu_domain.
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 94 +++++++++++++++++----
1 file changed, 79 insertions(+), 15 deletions(-)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 9adc2cedd487b..2f305037b9250 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2213,10 +2213,41 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
return 0;
}
+static int arm_smmu_prepare_pgtbl_cfg(struct arm_smmu_device *smmu,
+ enum arm_smmu_domain_stage stage,
+ struct io_pgtable_cfg *pgtbl_cfg)
+{
+ unsigned long ias, oas;
+
+ switch (stage) {
+ case ARM_SMMU_DOMAIN_S1:
+ ias = (smmu->features & ARM_SMMU_FEAT_VAX) ? 52 : 48;
+ ias = min_t(unsigned long, ias, VA_BITS);
+ oas = smmu->ias;
+ break;
+ case ARM_SMMU_DOMAIN_NESTED:
+ case ARM_SMMU_DOMAIN_S2:
+ ias = smmu->ias;
+ oas = smmu->oas;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ *pgtbl_cfg = (struct io_pgtable_cfg) {
+ .pgsize_bitmap = smmu->pgsize_bitmap,
+ .ias = ias,
+ .oas = oas,
+ .coherent_walk = smmu->features & ARM_SMMU_FEAT_COHERENCY,
+ .tlb = &arm_smmu_flush_ops,
+ .iommu_dev = smmu->dev,
+ };
+ return 0;
+}
+
static int arm_smmu_domain_finalise(struct iommu_domain *domain)
{
int ret;
- unsigned long ias, oas;
enum io_pgtable_fmt fmt;
struct io_pgtable_cfg pgtbl_cfg;
struct io_pgtable_ops *pgtbl_ops;
@@ -2238,16 +2269,11 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
switch (smmu_domain->stage) {
case ARM_SMMU_DOMAIN_S1:
- ias = (smmu->features & ARM_SMMU_FEAT_VAX) ? 52 : 48;
- ias = min_t(unsigned long, ias, VA_BITS);
- oas = smmu->ias;
fmt = ARM_64_LPAE_S1;
finalise_stage_fn = arm_smmu_domain_finalise_s1;
break;
case ARM_SMMU_DOMAIN_NESTED:
case ARM_SMMU_DOMAIN_S2:
- ias = smmu->ias;
- oas = smmu->oas;
fmt = ARM_64_LPAE_S2;
finalise_stage_fn = arm_smmu_domain_finalise_s2;
break;
@@ -2255,14 +2281,9 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
return -EINVAL;
}
- pgtbl_cfg = (struct io_pgtable_cfg) {
- .pgsize_bitmap = smmu->pgsize_bitmap,
- .ias = ias,
- .oas = oas,
- .coherent_walk = smmu->features & ARM_SMMU_FEAT_COHERENCY,
- .tlb = &arm_smmu_flush_ops,
- .iommu_dev = smmu->dev,
- };
+ ret = arm_smmu_prepare_pgtbl_cfg(smmu, smmu_domain->stage, &pgtbl_cfg);
+ if (ret)
+ return ret;
pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
if (!pgtbl_ops)
@@ -2424,6 +2445,48 @@ static void arm_smmu_disable_pasid(struct arm_smmu_master *master)
pci_disable_pasid(pdev);
}
+static int
+arm_smmu_verify_domain_compatible(struct arm_smmu_device *smmu,
+ struct arm_smmu_domain *smmu_domain)
+{
+ struct io_pgtable_cfg pgtbl_cfg;
+ struct io_pgtable_cfg *domain_pgtbl_cfg =
+ &io_pgtable_ops_to_pgtable(smmu_domain->pgtbl_ops)->cfg;
+ int ret;
+
+ if (smmu_domain->domain.type == IOMMU_DOMAIN_IDENTITY)
+ return 0;
+
+ if (smmu_domain->stage == ARM_SMMU_DOMAIN_S2) {
+ if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
+ return -EINVAL;
+ if (smmu_domain->s2_cfg.vmid >> smmu->vmid_bits)
+ return -EINVAL;
+ } else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+ if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1))
+ return -EINVAL;
+ if (smmu_domain->cd.asid >> smmu->asid_bits)
+ return -EINVAL;
+ }
+
+ ret = arm_smmu_prepare_pgtbl_cfg(smmu, smmu_domain->stage, &pgtbl_cfg);
+ if (ret)
+ return ret;
+
+ if (domain_pgtbl_cfg->ias > pgtbl_cfg.ias ||
+ domain_pgtbl_cfg->oas > pgtbl_cfg.oas ||
+ /*
+ * The supported pgsize_bitmap must be a superset of the domain's
+ * pgsize_bitmap.
+ */
+ (domain_pgtbl_cfg->pgsize_bitmap ^ pgtbl_cfg.pgsize_bitmap) &
+ domain_pgtbl_cfg->pgsize_bitmap ||
+ domain_pgtbl_cfg->coherent_walk != pgtbl_cfg.coherent_walk)
+ return -EINVAL;
+
+ return 0;
+}
+
static void arm_smmu_detach_dev(struct arm_smmu_master *master)
{
unsigned long flags;
@@ -2505,7 +2568,8 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
ret = arm_smmu_domain_finalise(domain);
if (ret)
smmu_domain->smmu = NULL;
- } else if (smmu_domain->smmu != smmu)
+ } else if (smmu_domain->smmu != smmu ||
+ !arm_smmu_verify_domain_compatible(smmu, smmu_domain))
ret = -EINVAL;
mutex_unlock(&smmu_domain->init_mutex);
--
2.42.0.rc1.204.g551eb34607-goog
* [RFC PATCH v2 7/9] iommu/arm-smmu-v3: Add arm_smmu_device as a parameter to domain_finalise
2023-08-22 10:56 [RFC PATCH v2 0/9] Install domain onto multiple smmus Michael Shavit
` (5 preceding siblings ...)
2023-08-22 10:57 ` [RFC PATCH v2 6/9] iommu/arm-smmu-v3: check smmu compatibility on attach Michael Shavit
@ 2023-08-22 10:57 ` Michael Shavit
2023-08-22 10:57 ` [RFC PATCH v2 8/9] iommu/arm-smmu-v3: check for domain initialization using pgtbl_ops Michael Shavit
` (2 subsequent siblings)
9 siblings, 0 replies; 15+ messages in thread
From: Michael Shavit @ 2023-08-22 10:57 UTC (permalink / raw)
To: iommu, linux-arm-kernel, linux-kernel
Cc: nicolinc, tina.zhang, jean-philippe, will, robin.murphy, jgg,
Michael Shavit, Dawei Li, Jason Gunthorpe, Joerg Roedel,
Kirill A. Shutemov, Lu Baolu, Mark Brown
Remove the usage of arm_smmu_domain->smmu in arm_smmu_domain_finalise,
as that field will be removed in a subsequent commit.
Signed-off-by: Michael Shavit <mshavit@google.com>
---
(no changes since v1)
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 2f305037b9250..7c9897702bcde 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2150,11 +2150,11 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
}
static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
+ struct arm_smmu_device *smmu,
struct io_pgtable_cfg *pgtbl_cfg)
{
int ret;
u32 asid;
- struct arm_smmu_device *smmu = smmu_domain->smmu;
struct arm_smmu_ctx_desc *cd = &smmu_domain->cd;
typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr;
@@ -2187,10 +2187,10 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
}
static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
+ struct arm_smmu_device *smmu,
struct io_pgtable_cfg *pgtbl_cfg)
{
int vmid;
- struct arm_smmu_device *smmu = smmu_domain->smmu;
struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
typeof(&pgtbl_cfg->arm_lpae_s2_cfg.vtcr) vtcr;
@@ -2245,16 +2245,17 @@ static int arm_smmu_prepare_pgtbl_cfg(struct arm_smmu_device *smmu,
return 0;
}
-static int arm_smmu_domain_finalise(struct iommu_domain *domain)
+static int arm_smmu_domain_finalise(struct iommu_domain *domain,
+ struct arm_smmu_device *smmu)
{
int ret;
enum io_pgtable_fmt fmt;
struct io_pgtable_cfg pgtbl_cfg;
struct io_pgtable_ops *pgtbl_ops;
int (*finalise_stage_fn)(struct arm_smmu_domain *,
+ struct arm_smmu_device *,
struct io_pgtable_cfg *);
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
- struct arm_smmu_device *smmu = smmu_domain->smmu;
if (domain->type == IOMMU_DOMAIN_IDENTITY) {
smmu_domain->stage = ARM_SMMU_DOMAIN_BYPASS;
@@ -2293,7 +2294,7 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
domain->geometry.aperture_end = (1UL << pgtbl_cfg.ias) - 1;
domain->geometry.force_aperture = true;
- ret = finalise_stage_fn(smmu_domain, &pgtbl_cfg);
+ ret = finalise_stage_fn(smmu_domain, smmu, &pgtbl_cfg);
if (ret < 0) {
free_io_pgtable_ops(pgtbl_ops);
return ret;
@@ -2565,7 +2566,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
if (!smmu_domain->smmu) {
smmu_domain->smmu = smmu;
- ret = arm_smmu_domain_finalise(domain);
+ ret = arm_smmu_domain_finalise(domain, smmu);
if (ret)
smmu_domain->smmu = NULL;
} else if (smmu_domain->smmu != smmu ||
--
2.42.0.rc1.204.g551eb34607-goog
* [RFC PATCH v2 8/9] iommu/arm-smmu-v3: check for domain initialization using pgtbl_ops
2023-08-22 10:56 [RFC PATCH v2 0/9] Install domain onto multiple smmus Michael Shavit
` (6 preceding siblings ...)
2023-08-22 10:57 ` [RFC PATCH v2 7/9] iommu/arm-smmu-v3: Add arm_smmu_device as a parameter to domain_finalise Michael Shavit
@ 2023-08-22 10:57 ` Michael Shavit
2023-08-22 10:57 ` [RFC PATCH v2 9/9] iommu/arm-smmu-v3: allow multi-SMMU domain installs Michael Shavit
2023-08-23 2:42 ` [RFC PATCH v2 0/9] Install domain onto multiple smmus Baolu Lu
9 siblings, 0 replies; 15+ messages in thread
From: Michael Shavit @ 2023-08-22 10:57 UTC (permalink / raw)
To: iommu, linux-arm-kernel, linux-kernel
Cc: nicolinc, tina.zhang, jean-philippe, will, robin.murphy, jgg,
Michael Shavit, Dawei Li, Jason Gunthorpe, Joerg Roedel,
Kirill A. Shutemov, Lu Baolu, Mark Brown
Check whether a domain has already been initialized by looking at its
pgtbl_ops pointer instead of smmu_domain->smmu, in order to remove
smmu_domain->smmu in the next commit.
Signed-off-by: Michael Shavit <mshavit@google.com>
---
(no changes since v1)
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 7c9897702bcde..9f8b701771fc3 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2894,7 +2894,7 @@ static int arm_smmu_enable_nesting(struct iommu_domain *domain)
int ret = 0;
mutex_lock(&smmu_domain->init_mutex);
- if (smmu_domain->smmu)
+ if (smmu_domain->pgtbl_ops)
ret = -EPERM;
else
smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
--
2.42.0.rc1.204.g551eb34607-goog
* [RFC PATCH v2 9/9] iommu/arm-smmu-v3: allow multi-SMMU domain installs.
2023-08-22 10:56 [RFC PATCH v2 0/9] Install domain onto multiple smmus Michael Shavit
` (7 preceding siblings ...)
2023-08-22 10:57 ` [RFC PATCH v2 8/9] iommu/arm-smmu-v3: check for domain initialization using pgtbl_ops Michael Shavit
@ 2023-08-22 10:57 ` Michael Shavit
2023-08-23 2:42 ` [RFC PATCH v2 0/9] Install domain onto multiple smmus Baolu Lu
9 siblings, 0 replies; 15+ messages in thread
From: Michael Shavit @ 2023-08-22 10:57 UTC (permalink / raw)
To: iommu, linux-arm-kernel, linux-kernel
Cc: nicolinc, tina.zhang, jean-philippe, will, robin.murphy, jgg,
Michael Shavit, Dawei Li, Jason Gunthorpe, Joerg Roedel,
Kirill A. Shutemov, Lu Baolu, Mark Brown
Remove the arm_smmu_domain->smmu handle now that a domain may be
attached to devices with different upstream SMMUs.
Signed-off-by: Michael Shavit <mshavit@google.com>
---
(no changes since v1)
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 10 +++-------
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 3 +--
2 files changed, 4 insertions(+), 9 deletions(-)
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 9f8b701771fc3..55c0b8aecfb0a 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2564,13 +2564,9 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
mutex_lock(&smmu_domain->init_mutex);
- if (!smmu_domain->smmu) {
- smmu_domain->smmu = smmu;
- ret = arm_smmu_domain_finalise(domain, smmu);
- if (ret)
- smmu_domain->smmu = NULL;
- } else if (smmu_domain->smmu != smmu ||
- !arm_smmu_verify_domain_compatible(smmu, smmu_domain))
+ if (!smmu_domain->pgtbl_ops)
+ ret = arm_smmu_domain_finalise(&smmu_domain->domain, smmu);
+ else if (!arm_smmu_verify_domain_compatible(smmu, smmu_domain))
ret = -EINVAL;
mutex_unlock(&smmu_domain->init_mutex);
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 1661d3252bac5..fcf3845f4659c 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -716,8 +716,7 @@ enum arm_smmu_domain_stage {
};
struct arm_smmu_domain {
- struct arm_smmu_device *smmu;
- struct mutex init_mutex; /* Protects smmu pointer */
+ struct mutex init_mutex; /* Protects pgtbl_ops pointer */
struct io_pgtable_ops *pgtbl_ops;
atomic_t nr_ats_masters;
--
2.42.0.rc1.204.g551eb34607-goog
* Re: [RFC PATCH v2 0/9] Install domain onto multiple smmus
2023-08-22 10:56 [RFC PATCH v2 0/9] Install domain onto multiple smmus Michael Shavit
` (8 preceding siblings ...)
2023-08-22 10:57 ` [RFC PATCH v2 9/9] iommu/arm-smmu-v3: allow multi-SMMU domain installs Michael Shavit
@ 2023-08-23 2:42 ` Baolu Lu
9 siblings, 0 replies; 15+ messages in thread
From: Baolu Lu @ 2023-08-23 2:42 UTC (permalink / raw)
To: Michael Shavit, iommu, linux-arm-kernel, linux-kernel
Cc: baolu.lu, nicolinc, tina.zhang, jean-philippe, will, robin.murphy,
jgg, Dawei Li, Jason Gunthorpe, Joerg Roedel, Kirill A. Shutemov,
Mark Brown
On 2023/8/22 18:56, Michael Shavit wrote:
>
> Hi all,
>
> This series refactors the arm-smmu-v3 driver to support attaching
> domains onto masters belonging to different smmu devices.
>
> The main objective of this series is allow further refactorings of
> arm-smmu-v3-sva. Specifically, we'd like to reach the state where:
> 1. A single SVA domain is allocated per MM/ASID
The core side of this work is under discussion.
https://lore.kernel.org/linux-iommu/20230808074944.7825-1-tina.zhang@intel.com/
Just FYI.
Best regards,
baolu