* [PATCH 00/14] Minor fixups and refactorings for AMD's io-pgtable code
@ 2024-08-21 17:37 Jason Gunthorpe
2024-08-21 17:37 ` [PATCH 01/14] iommu/amd: Move allocation of the top table into v1_alloc_pgtable Jason Gunthorpe
` (13 more replies)
0 siblings, 14 replies; 32+ messages in thread
From: Jason Gunthorpe @ 2024-08-21 17:37 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Suravee Suthikulpanit,
Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches,
Vasant Hegde
This is some preparation for the Generic PT work here:
https://patch.msgid.link/r/0-v1-01fa10580981+1d-iommu_pt_jgg@nvidia.com
This series aims to increase the separation between the page table code and
the rest of the driver. Ideally the io_pgtable layer should be completely
self-contained and not reach out into the IOMMU driver's
iommu_domain. This doesn't fully get there, but it does enough for now.
Jason Gunthorpe (14):
iommu/amd: Move allocation of the top table into v1_alloc_pgtable
iommu/amd: Allocate the page table root using GFP_KERNEL
iommu/amd: Set the pgsize_bitmap correctly
iommu/amd: Remove amd_iommu_domain_update() from page table freeing
iommu/amd: Remove the amd_iommu_domain_set_pt_root() and related
iommu/amd: Rename struct amd_io_pgtable iopt to pgtbl
iommu/amd: Remove amd_io_pgtable::pgtbl_cfg
iommu/amd: Store the nid in io_pgtable_cfg instead of the domain
iommu/amd: Narrow the use of struct protection_domain to invalidation
iommu/amd: Remove conditions from domain free paths
iommu/amd: Fix typo of , instead of ;
iommu/amd: Remove the confusing dummy iommu_flush_ops tlb ops
iommu/amd: Correct the reported page sizes from the V1 table
iommu/amd: Do not set the D bit on AMD v2 table entries
drivers/iommu/amd/amd_iommu.h | 15 +----
drivers/iommu/amd/amd_iommu_types.h | 11 ++-
drivers/iommu/amd/io_pgtable.c | 101 +++++++++-------------------
drivers/iommu/amd/io_pgtable_v2.c | 56 +++++----------
drivers/iommu/amd/iommu.c | 96 ++++++++------------------
drivers/iommu/amd/pasid.c | 2 +-
include/linux/io-pgtable.h | 4 ++
7 files changed, 86 insertions(+), 199 deletions(-)
base-commit: 47ac09b91befbb6a235ab620c32af719f8208399
--
2.46.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH 01/14] iommu/amd: Move allocation of the top table into v1_alloc_pgtable
2024-08-21 17:37 [PATCH 00/14] Minor fixups and refactorings for AMD's io-pgtable code Jason Gunthorpe
@ 2024-08-21 17:37 ` Jason Gunthorpe
2024-08-23 9:26 ` Joerg Roedel
2024-08-28 6:22 ` Vasant Hegde
2024-08-21 17:37 ` [PATCH 02/14] iommu/amd: Allocate the page table root using GFP_KERNEL Jason Gunthorpe
` (12 subsequent siblings)
13 siblings, 2 replies; 32+ messages in thread
From: Jason Gunthorpe @ 2024-08-21 17:37 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Suravee Suthikulpanit,
Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches,
Vasant Hegde
All the page table memory should be allocated/freed within the io_pgtable
struct. The v2 path is already doing this; make the v1 path consistent.
It is hard to see, but the free of the root in protection_domain_free() is
a NOP on the success path because v1_free_pgtable() does
amd_iommu_domain_clr_pt_root().
The root memory is already freed because free_sub_pt() put it on the
freelist. The free path in protection_domain_free() is only used during
error unwind of protection_domain_alloc().
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/amd/io_pgtable.c | 8 ++++++--
drivers/iommu/amd/iommu.c | 21 ++-------------------
2 files changed, 8 insertions(+), 21 deletions(-)
diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
index 1074ee25064d06..05aed3cb46f1bf 100644
--- a/drivers/iommu/amd/io_pgtable.c
+++ b/drivers/iommu/amd/io_pgtable.c
@@ -574,20 +574,24 @@ static void v1_free_pgtable(struct io_pgtable *iop)
pgtable->mode > PAGE_MODE_6_LEVEL);
free_sub_pt(pgtable->root, pgtable->mode, &freelist);
+ iommu_put_pages_list(&freelist);
/* Update data structure */
amd_iommu_domain_clr_pt_root(dom);
/* Make changes visible to IOMMUs */
amd_iommu_domain_update(dom);
-
- iommu_put_pages_list(&freelist);
}
static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
{
struct amd_io_pgtable *pgtable = io_pgtable_cfg_to_data(cfg);
+ pgtable->root = iommu_alloc_page(GFP_KERNEL);
+ if (!pgtable->root)
+ return NULL;
+ pgtable->mode = PAGE_MODE_3_LEVEL;
+
cfg->pgsize_bitmap = AMD_IOMMU_PGSIZES;
cfg->ias = IOMMU_IN_ADDR_BIT_SIZE;
cfg->oas = IOMMU_OUT_ADDR_BIT_SIZE;
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index b19e8c0f48fa25..b42e695af6dbe9 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -52,8 +52,6 @@
#define HT_RANGE_START (0xfd00000000ULL)
#define HT_RANGE_END (0xffffffffffULL)
-#define DEFAULT_PGTABLE_LEVEL PAGE_MODE_3_LEVEL
-
static DEFINE_SPINLOCK(pd_bitmap_lock);
LIST_HEAD(ioapic_map);
@@ -2265,30 +2263,15 @@ void protection_domain_free(struct protection_domain *domain)
if (domain->iop.pgtbl_cfg.tlb)
free_io_pgtable_ops(&domain->iop.iop.ops);
- if (domain->iop.root)
- iommu_free_page(domain->iop.root);
-
if (domain->id)
domain_id_free(domain->id);
kfree(domain);
}
-static int protection_domain_init_v1(struct protection_domain *domain, int mode)
+static int protection_domain_init_v1(struct protection_domain *domain)
{
- u64 *pt_root = NULL;
-
- BUG_ON(mode < PAGE_MODE_NONE || mode > PAGE_MODE_6_LEVEL);
-
- if (mode != PAGE_MODE_NONE) {
- pt_root = iommu_alloc_page(GFP_KERNEL);
- if (!pt_root)
- return -ENOMEM;
- }
-
domain->pd_mode = PD_MODE_V1;
- amd_iommu_domain_set_pgtable(domain, pt_root, mode);
-
return 0;
}
@@ -2341,7 +2324,7 @@ struct protection_domain *protection_domain_alloc(unsigned int type)
switch (pgtable) {
case AMD_IOMMU_V1:
- ret = protection_domain_init_v1(domain, DEFAULT_PGTABLE_LEVEL);
+ ret = protection_domain_init_v1(domain);
break;
case AMD_IOMMU_V2:
ret = protection_domain_init_v2(domain);
--
2.46.0
* [PATCH 02/14] iommu/amd: Allocate the page table root using GFP_KERNEL
2024-08-21 17:37 [PATCH 00/14] Minor fixups and refactorings for AMD's io-pgtable code Jason Gunthorpe
2024-08-21 17:37 ` [PATCH 01/14] iommu/amd: Move allocation of the top table into v1_alloc_pgtable Jason Gunthorpe
@ 2024-08-21 17:37 ` Jason Gunthorpe
2024-08-27 14:38 ` Vasant Hegde
2024-08-21 17:37 ` [PATCH 03/14] iommu/amd: Set the pgsize_bitmap correctly Jason Gunthorpe
` (11 subsequent siblings)
13 siblings, 1 reply; 32+ messages in thread
From: Jason Gunthorpe @ 2024-08-21 17:37 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Suravee Suthikulpanit,
Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches,
Vasant Hegde
Domain allocation is always done under a sleepable context; the v1 path
and other drivers already use GFP_KERNEL. Fix the v2 path to also use
GFP_KERNEL.
Fixes: 0d571dcbe7c6 ("iommu/amd: Allocate page table using numa locality info")
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/amd/io_pgtable_v2.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
index 664e91c88748ea..6088822180e1e4 100644
--- a/drivers/iommu/amd/io_pgtable_v2.c
+++ b/drivers/iommu/amd/io_pgtable_v2.c
@@ -362,7 +362,7 @@ static struct io_pgtable *v2_alloc_pgtable(struct io_pgtable_cfg *cfg, void *coo
struct protection_domain *pdom = (struct protection_domain *)cookie;
int ias = IOMMU_IN_ADDR_BIT_SIZE;
- pgtable->pgd = iommu_alloc_page_node(pdom->nid, GFP_ATOMIC);
+ pgtable->pgd = iommu_alloc_page_node(pdom->nid, GFP_KERNEL);
if (!pgtable->pgd)
return NULL;
--
2.46.0
* [PATCH 03/14] iommu/amd: Set the pgsize_bitmap correctly
2024-08-21 17:37 [PATCH 00/14] Minor fixups and refactorings for AMD's io-pgtable code Jason Gunthorpe
2024-08-21 17:37 ` [PATCH 01/14] iommu/amd: Move allocation of the top table into v1_alloc_pgtable Jason Gunthorpe
2024-08-21 17:37 ` [PATCH 02/14] iommu/amd: Allocate the page table root using GFP_KERNEL Jason Gunthorpe
@ 2024-08-21 17:37 ` Jason Gunthorpe
2024-08-28 6:26 ` Vasant Hegde
2024-08-21 17:37 ` [PATCH 04/14] iommu/amd: Remove amd_iommu_domain_update() from page table freeing Jason Gunthorpe
` (10 subsequent siblings)
13 siblings, 1 reply; 32+ messages in thread
From: Jason Gunthorpe @ 2024-08-21 17:37 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Suravee Suthikulpanit,
Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches,
Vasant Hegde
When using io_pgtable the correct pgsize_bitmap is stored in the cfg; both
v1_alloc_pgtable() and v2_alloc_pgtable() set it correctly.
This fixes a bug where the v2 pgtable had the wrong pgsize, as
protection_domain_init_v2() would set it and then do_iommu_domain_alloc()
would immediately reset it.
Remove the confusing ops.pgsize_bitmap since that is not used if the
driver sets domain.pgsize_bitmap.
Fixes: 134288158a41 ("iommu/amd: Add domain_alloc_user based domain allocation")
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/amd/iommu.c | 28 ++++------------------------
1 file changed, 4 insertions(+), 24 deletions(-)
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index b42e695af6dbe9..e53ffb86c3d09b 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2269,26 +2269,11 @@ void protection_domain_free(struct protection_domain *domain)
kfree(domain);
}
-static int protection_domain_init_v1(struct protection_domain *domain)
-{
- domain->pd_mode = PD_MODE_V1;
- return 0;
-}
-
-static int protection_domain_init_v2(struct protection_domain *pdom)
-{
- pdom->pd_mode = PD_MODE_V2;
- pdom->domain.pgsize_bitmap = AMD_IOMMU_PGSIZES_V2;
-
- return 0;
-}
-
struct protection_domain *protection_domain_alloc(unsigned int type)
{
struct io_pgtable_ops *pgtbl_ops;
struct protection_domain *domain;
int pgtable;
- int ret;
domain = kzalloc(sizeof(*domain), GFP_KERNEL);
if (!domain)
@@ -2324,18 +2309,14 @@ struct protection_domain *protection_domain_alloc(unsigned int type)
switch (pgtable) {
case AMD_IOMMU_V1:
- ret = protection_domain_init_v1(domain);
+ domain->pd_mode = PD_MODE_V1;
break;
case AMD_IOMMU_V2:
- ret = protection_domain_init_v2(domain);
+ domain->pd_mode = PD_MODE_V2;
break;
default:
- ret = -EINVAL;
- break;
- }
-
- if (ret)
goto out_err;
+ }
pgtbl_ops = alloc_io_pgtable_ops(pgtable, &domain->iop.pgtbl_cfg, domain);
if (!pgtbl_ops)
@@ -2388,10 +2369,10 @@ static struct iommu_domain *do_iommu_domain_alloc(unsigned int type,
domain->domain.geometry.aperture_start = 0;
domain->domain.geometry.aperture_end = dma_max_address();
domain->domain.geometry.force_aperture = true;
+ domain->domain.pgsize_bitmap = domain->iop.iop.cfg.pgsize_bitmap;
if (iommu) {
domain->domain.type = type;
- domain->domain.pgsize_bitmap = iommu->iommu.ops->pgsize_bitmap;
domain->domain.ops = iommu->iommu.ops->default_domain_ops;
if (dirty_tracking)
@@ -2850,7 +2831,6 @@ const struct iommu_ops amd_iommu_ops = {
.device_group = amd_iommu_device_group,
.get_resv_regions = amd_iommu_get_resv_regions,
.is_attach_deferred = amd_iommu_is_attach_deferred,
- .pgsize_bitmap = AMD_IOMMU_PGSIZES,
.def_domain_type = amd_iommu_def_domain_type,
.dev_enable_feat = amd_iommu_dev_enable_feature,
.dev_disable_feat = amd_iommu_dev_disable_feature,
--
2.46.0
* [PATCH 04/14] iommu/amd: Remove amd_iommu_domain_update() from page table freeing
2024-08-21 17:37 [PATCH 00/14] Minor fixups and refactorings for AMD's io-pgtable code Jason Gunthorpe
` (2 preceding siblings ...)
2024-08-21 17:37 ` [PATCH 03/14] iommu/amd: Set the pgsize_bitmap correctly Jason Gunthorpe
@ 2024-08-21 17:37 ` Jason Gunthorpe
2024-08-28 6:26 ` Vasant Hegde
2024-08-21 17:37 ` [PATCH 05/14] iommu/amd: Remove the amd_iommu_domain_set_pt_root() and related Jason Gunthorpe
` (9 subsequent siblings)
13 siblings, 1 reply; 32+ messages in thread
From: Jason Gunthorpe @ 2024-08-21 17:37 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Suravee Suthikulpanit,
Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches,
Vasant Hegde
It is a serious bug if the domain is still mapped to any DTEs when it is
freed, as we immediately start freeing page table memory, so any remaining
HW touch will UAF.
If it is not mapped then dev_list is empty and amd_iommu_domain_update()
does nothing.
Remove it and add a WARN_ON() to catch this class of bug.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/amd/io_pgtable.c | 3 ---
drivers/iommu/amd/iommu.c | 2 ++
2 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
index 05aed3cb46f1bf..b3991ad1ae8ea3 100644
--- a/drivers/iommu/amd/io_pgtable.c
+++ b/drivers/iommu/amd/io_pgtable.c
@@ -578,9 +578,6 @@ static void v1_free_pgtable(struct io_pgtable *iop)
/* Update data structure */
amd_iommu_domain_clr_pt_root(dom);
-
- /* Make changes visible to IOMMUs */
- amd_iommu_domain_update(dom);
}
static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index e53ffb86c3d09b..426aecacc63009 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2260,6 +2260,8 @@ void protection_domain_free(struct protection_domain *domain)
if (!domain)
return;
+ WARN_ON(!list_empty(&domain->dev_list));
+
if (domain->iop.pgtbl_cfg.tlb)
free_io_pgtable_ops(&domain->iop.iop.ops);
--
2.46.0
* [PATCH 05/14] iommu/amd: Remove the amd_iommu_domain_set_pt_root() and related
2024-08-21 17:37 [PATCH 00/14] Minor fixups and refactorings for AMD's io-pgtable code Jason Gunthorpe
` (3 preceding siblings ...)
2024-08-21 17:37 ` [PATCH 04/14] iommu/amd: Remove amd_iommu_domain_update() from page table freeing Jason Gunthorpe
@ 2024-08-21 17:37 ` Jason Gunthorpe
2024-08-28 6:27 ` Vasant Hegde
2024-08-21 17:37 ` [PATCH 06/14] iommu/amd: Rename struct amd_io_pgtable iopt to pgtbl Jason Gunthorpe
` (8 subsequent siblings)
13 siblings, 1 reply; 32+ messages in thread
From: Jason Gunthorpe @ 2024-08-21 17:37 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Suravee Suthikulpanit,
Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches,
Vasant Hegde
Looks like many refactorings here have left this confused. There is only
one storage of the root/mode: it is in the iop struct.
increase_address_space() calls amd_iommu_domain_set_pgtable() with values
that it already stored in iop a few lines above.
amd_iommu_domain_clr_pt_root() is zeroing memory we are about to free. It
used to protect against a double free of the root, but that is gone now.
Remove amd_iommu_domain_set_pgtable(), amd_iommu_domain_set_pt_root(),
amd_iommu_domain_clr_pt_root() as they are all pointless.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/amd/amd_iommu.h | 13 -------------
drivers/iommu/amd/io_pgtable.c | 24 ------------------------
2 files changed, 37 deletions(-)
diff --git a/drivers/iommu/amd/amd_iommu.h b/drivers/iommu/amd/amd_iommu.h
index 2d5945c982bde5..5a050080d2e814 100644
--- a/drivers/iommu/amd/amd_iommu.h
+++ b/drivers/iommu/amd/amd_iommu.h
@@ -143,19 +143,6 @@ static inline void *iommu_phys_to_virt(unsigned long paddr)
return phys_to_virt(__sme_clr(paddr));
}
-static inline
-void amd_iommu_domain_set_pt_root(struct protection_domain *domain, u64 root)
-{
- domain->iop.root = (u64 *)(root & PAGE_MASK);
- domain->iop.mode = root & 7; /* lowest 3 bits encode pgtable mode */
-}
-
-static inline
-void amd_iommu_domain_clr_pt_root(struct protection_domain *domain)
-{
- amd_iommu_domain_set_pt_root(domain, 0);
-}
-
static inline int get_pci_sbdf_id(struct pci_dev *pdev)
{
int seg = pci_domain_nr(pdev->bus);
diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
index b3991ad1ae8ea3..5278ba3f676c45 100644
--- a/drivers/iommu/amd/io_pgtable.c
+++ b/drivers/iommu/amd/io_pgtable.c
@@ -132,18 +132,6 @@ static void free_sub_pt(u64 *root, int mode, struct list_head *freelist)
}
}
-void amd_iommu_domain_set_pgtable(struct protection_domain *domain,
- u64 *root, int mode)
-{
- u64 pt_root;
-
- /* lowest 3 bits encode pgtable mode */
- pt_root = mode & 7;
- pt_root |= (u64)root;
-
- amd_iommu_domain_set_pt_root(domain, pt_root);
-}
-
/*
* This function is used to add another level to an IO page table. Adding
* another level increases the size of the address space by 9 bits to a size up
@@ -177,12 +165,6 @@ static bool increase_address_space(struct protection_domain *domain,
amd_iommu_update_and_flush_device_table(domain);
amd_iommu_domain_flush_complete(domain);
- /*
- * Device Table needs to be updated and flushed before the new root can
- * be published.
- */
- amd_iommu_domain_set_pgtable(domain, pte, domain->iop.mode);
-
pte = NULL;
ret = true;
@@ -561,23 +543,17 @@ static int iommu_v1_read_and_clear_dirty(struct io_pgtable_ops *ops,
static void v1_free_pgtable(struct io_pgtable *iop)
{
struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, iop);
- struct protection_domain *dom;
LIST_HEAD(freelist);
if (pgtable->mode == PAGE_MODE_NONE)
return;
- dom = container_of(pgtable, struct protection_domain, iop);
-
/* Page-table is not visible to IOMMU anymore, so free it */
BUG_ON(pgtable->mode < PAGE_MODE_NONE ||
pgtable->mode > PAGE_MODE_6_LEVEL);
free_sub_pt(pgtable->root, pgtable->mode, &freelist);
iommu_put_pages_list(&freelist);
-
- /* Update data structure */
- amd_iommu_domain_clr_pt_root(dom);
}
static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
--
2.46.0
* [PATCH 06/14] iommu/amd: Rename struct amd_io_pgtable iopt to pgtbl
2024-08-21 17:37 [PATCH 00/14] Minor fixups and refactorings for AMD's io-pgtable code Jason Gunthorpe
` (4 preceding siblings ...)
2024-08-21 17:37 ` [PATCH 05/14] iommu/amd: Remove the amd_iommu_domain_set_pt_root() and related Jason Gunthorpe
@ 2024-08-21 17:37 ` Jason Gunthorpe
2024-08-28 6:28 ` Vasant Hegde
2024-08-21 17:37 ` [PATCH 07/14] iommu/amd: Remove amd_io_pgtable::pgtbl_cfg Jason Gunthorpe
` (7 subsequent siblings)
13 siblings, 1 reply; 32+ messages in thread
From: Jason Gunthorpe @ 2024-08-21 17:37 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Suravee Suthikulpanit,
Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches,
Vasant Hegde
There is struct protection_domain::iop and struct amd_io_pgtable::iop.
Next patches are going to want to write domain.iop.iop.xx, which is quite
unnatural to read.
Give one of them a different name; amd_io_pgtable has fewer references,
so rename its member to pgtbl, to match pgtbl_cfg.
Suggested-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/amd/amd_iommu_types.h | 4 ++--
drivers/iommu/amd/io_pgtable.c | 12 ++++++------
drivers/iommu/amd/io_pgtable_v2.c | 14 +++++++-------
drivers/iommu/amd/iommu.c | 14 +++++++-------
4 files changed, 22 insertions(+), 22 deletions(-)
diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
index 2b76b5dedc1d9b..90a2e4790bffdf 100644
--- a/drivers/iommu/amd/amd_iommu_types.h
+++ b/drivers/iommu/amd/amd_iommu_types.h
@@ -527,7 +527,7 @@ struct amd_irte_ops;
#define AMD_IOMMU_FLAG_TRANS_PRE_ENABLED (1 << 0)
#define io_pgtable_to_data(x) \
- container_of((x), struct amd_io_pgtable, iop)
+ container_of((x), struct amd_io_pgtable, pgtbl)
#define io_pgtable_ops_to_data(x) \
io_pgtable_to_data(io_pgtable_ops_to_pgtable(x))
@@ -548,7 +548,7 @@ struct gcr3_tbl_info {
struct amd_io_pgtable {
struct io_pgtable_cfg pgtbl_cfg;
- struct io_pgtable iop;
+ struct io_pgtable pgtbl;
int mode;
u64 *root;
u64 *pgd; /* v2 pgtable pgd pointer */
diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
index 5278ba3f676c45..dab1cf53b1f3ff 100644
--- a/drivers/iommu/amd/io_pgtable.c
+++ b/drivers/iommu/amd/io_pgtable.c
@@ -542,7 +542,7 @@ static int iommu_v1_read_and_clear_dirty(struct io_pgtable_ops *ops,
*/
static void v1_free_pgtable(struct io_pgtable *iop)
{
- struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, iop);
+ struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, pgtbl);
LIST_HEAD(freelist);
if (pgtable->mode == PAGE_MODE_NONE)
@@ -570,12 +570,12 @@ static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *coo
cfg->oas = IOMMU_OUT_ADDR_BIT_SIZE;
cfg->tlb = &v1_flush_ops;
- pgtable->iop.ops.map_pages = iommu_v1_map_pages;
- pgtable->iop.ops.unmap_pages = iommu_v1_unmap_pages;
- pgtable->iop.ops.iova_to_phys = iommu_v1_iova_to_phys;
- pgtable->iop.ops.read_and_clear_dirty = iommu_v1_read_and_clear_dirty;
+ pgtable->pgtbl.ops.map_pages = iommu_v1_map_pages;
+ pgtable->pgtbl.ops.unmap_pages = iommu_v1_unmap_pages;
+ pgtable->pgtbl.ops.iova_to_phys = iommu_v1_iova_to_phys;
+ pgtable->pgtbl.ops.read_and_clear_dirty = iommu_v1_read_and_clear_dirty;
- return &pgtable->iop;
+ return &pgtable->pgtbl;
}
struct io_pgtable_init_fns io_pgtable_amd_iommu_v1_init_fns = {
diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
index 6088822180e1e4..45a6bc3326397f 100644
--- a/drivers/iommu/amd/io_pgtable_v2.c
+++ b/drivers/iommu/amd/io_pgtable_v2.c
@@ -234,7 +234,7 @@ static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
int prot, gfp_t gfp, size_t *mapped)
{
struct protection_domain *pdom = io_pgtable_ops_to_domain(ops);
- struct io_pgtable_cfg *cfg = &pdom->iop.iop.cfg;
+ struct io_pgtable_cfg *cfg = &pdom->iop.pgtbl.cfg;
u64 *pte;
unsigned long map_size;
unsigned long mapped_size = 0;
@@ -281,7 +281,7 @@ static unsigned long iommu_v2_unmap_pages(struct io_pgtable_ops *ops,
struct iommu_iotlb_gather *gather)
{
struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
- struct io_pgtable_cfg *cfg = &pgtable->iop.cfg;
+ struct io_pgtable_cfg *cfg = &pgtable->pgtbl.cfg;
unsigned long unmap_size;
unsigned long unmapped = 0;
size_t size = pgcount << __ffs(pgsize);
@@ -346,7 +346,7 @@ static const struct iommu_flush_ops v2_flush_ops = {
static void v2_free_pgtable(struct io_pgtable *iop)
{
- struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, iop);
+ struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, pgtbl);
if (!pgtable || !pgtable->pgd)
return;
@@ -369,16 +369,16 @@ static struct io_pgtable *v2_alloc_pgtable(struct io_pgtable_cfg *cfg, void *coo
if (get_pgtable_level() == PAGE_MODE_5_LEVEL)
ias = 57;
- pgtable->iop.ops.map_pages = iommu_v2_map_pages;
- pgtable->iop.ops.unmap_pages = iommu_v2_unmap_pages;
- pgtable->iop.ops.iova_to_phys = iommu_v2_iova_to_phys;
+ pgtable->pgtbl.ops.map_pages = iommu_v2_map_pages;
+ pgtable->pgtbl.ops.unmap_pages = iommu_v2_unmap_pages;
+ pgtable->pgtbl.ops.iova_to_phys = iommu_v2_iova_to_phys;
cfg->pgsize_bitmap = AMD_IOMMU_PGSIZES_V2,
cfg->ias = ias,
cfg->oas = IOMMU_OUT_ADDR_BIT_SIZE,
cfg->tlb = &v2_flush_ops;
- return &pgtable->iop;
+ return &pgtable->pgtbl;
}
struct io_pgtable_init_fns io_pgtable_amd_iommu_v2_init_fns = {
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 426aecacc63009..e3e935b0fdf637 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2263,7 +2263,7 @@ void protection_domain_free(struct protection_domain *domain)
WARN_ON(!list_empty(&domain->dev_list));
if (domain->iop.pgtbl_cfg.tlb)
- free_io_pgtable_ops(&domain->iop.iop.ops);
+ free_io_pgtable_ops(&domain->iop.pgtbl.ops);
if (domain->id)
domain_id_free(domain->id);
@@ -2371,7 +2371,7 @@ static struct iommu_domain *do_iommu_domain_alloc(unsigned int type,
domain->domain.geometry.aperture_start = 0;
domain->domain.geometry.aperture_end = dma_max_address();
domain->domain.geometry.force_aperture = true;
- domain->domain.pgsize_bitmap = domain->iop.iop.cfg.pgsize_bitmap;
+ domain->domain.pgsize_bitmap = domain->iop.pgtbl.cfg.pgsize_bitmap;
if (iommu) {
domain->domain.type = type;
@@ -2492,7 +2492,7 @@ static int amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
unsigned long iova, size_t size)
{
struct protection_domain *domain = to_pdomain(dom);
- struct io_pgtable_ops *ops = &domain->iop.iop.ops;
+ struct io_pgtable_ops *ops = &domain->iop.pgtbl.ops;
if (ops->map_pages)
domain_flush_np_cache(domain, iova, size);
@@ -2504,7 +2504,7 @@ static int amd_iommu_map_pages(struct iommu_domain *dom, unsigned long iova,
int iommu_prot, gfp_t gfp, size_t *mapped)
{
struct protection_domain *domain = to_pdomain(dom);
- struct io_pgtable_ops *ops = &domain->iop.iop.ops;
+ struct io_pgtable_ops *ops = &domain->iop.pgtbl.ops;
int prot = 0;
int ret = -EINVAL;
@@ -2551,7 +2551,7 @@ static size_t amd_iommu_unmap_pages(struct iommu_domain *dom, unsigned long iova
struct iommu_iotlb_gather *gather)
{
struct protection_domain *domain = to_pdomain(dom);
- struct io_pgtable_ops *ops = &domain->iop.iop.ops;
+ struct io_pgtable_ops *ops = &domain->iop.pgtbl.ops;
size_t r;
if ((domain->pd_mode == PD_MODE_V1) &&
@@ -2570,7 +2570,7 @@ static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom,
dma_addr_t iova)
{
struct protection_domain *domain = to_pdomain(dom);
- struct io_pgtable_ops *ops = &domain->iop.iop.ops;
+ struct io_pgtable_ops *ops = &domain->iop.pgtbl.ops;
return ops->iova_to_phys(ops, iova);
}
@@ -2648,7 +2648,7 @@ static int amd_iommu_read_and_clear_dirty(struct iommu_domain *domain,
struct iommu_dirty_bitmap *dirty)
{
struct protection_domain *pdomain = to_pdomain(domain);
- struct io_pgtable_ops *ops = &pdomain->iop.iop.ops;
+ struct io_pgtable_ops *ops = &pdomain->iop.pgtbl.ops;
unsigned long lflags;
if (!ops || !ops->read_and_clear_dirty)
--
2.46.0
* [PATCH 07/14] iommu/amd: Remove amd_io_pgtable::pgtbl_cfg
2024-08-21 17:37 [PATCH 00/14] Minor fixups and refactorings for AMD's io-pgtable code Jason Gunthorpe
` (5 preceding siblings ...)
2024-08-21 17:37 ` [PATCH 06/14] iommu/amd: Rename struct amd_io_pgtable iopt to pgtbl Jason Gunthorpe
@ 2024-08-21 17:37 ` Jason Gunthorpe
2024-08-28 6:37 ` Vasant Hegde
2024-08-21 17:37 ` [PATCH 08/14] iommu/amd: Store the nid in io_pgtable_cfg instead of the domain Jason Gunthorpe
` (6 subsequent siblings)
13 siblings, 1 reply; 32+ messages in thread
From: Jason Gunthorpe @ 2024-08-21 17:37 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Suravee Suthikulpanit,
Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches,
Vasant Hegde
This struct is already in iop.cfg; we don't need two.
AMD is using this API sort of wrong: the cfg is supposed to be passed in,
and then the allocation function allocates ops memory and copies the
passed config into the new memory. Keep it kind of wrong and pass in the
cfg memory that is already part of the pagetable struct.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/amd/amd_iommu_types.h | 3 +--
drivers/iommu/amd/iommu.c | 5 +++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
index 90a2e4790bffdf..ef4c4887cbbbd5 100644
--- a/drivers/iommu/amd/amd_iommu_types.h
+++ b/drivers/iommu/amd/amd_iommu_types.h
@@ -537,7 +537,7 @@ struct amd_irte_ops;
struct protection_domain, iop)
#define io_pgtable_cfg_to_data(x) \
- container_of((x), struct amd_io_pgtable, pgtbl_cfg)
+ container_of((x), struct amd_io_pgtable, pgtbl.cfg)
struct gcr3_tbl_info {
u64 *gcr3_tbl; /* Guest CR3 table */
@@ -547,7 +547,6 @@ struct gcr3_tbl_info {
};
struct amd_io_pgtable {
- struct io_pgtable_cfg pgtbl_cfg;
struct io_pgtable pgtbl;
int mode;
u64 *root;
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index e3e935b0fdf637..3840b3ede65d32 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2262,7 +2262,7 @@ void protection_domain_free(struct protection_domain *domain)
WARN_ON(!list_empty(&domain->dev_list));
- if (domain->iop.pgtbl_cfg.tlb)
+ if (domain->iop.pgtbl.cfg.tlb)
free_io_pgtable_ops(&domain->iop.pgtbl.ops);
if (domain->id)
@@ -2320,7 +2320,8 @@ struct protection_domain *protection_domain_alloc(unsigned int type)
goto out_err;
}
- pgtbl_ops = alloc_io_pgtable_ops(pgtable, &domain->iop.pgtbl_cfg, domain);
+ pgtbl_ops =
+ alloc_io_pgtable_ops(pgtable, &domain->iop.pgtbl.cfg, domain);
if (!pgtbl_ops)
goto out_err;
--
2.46.0
* [PATCH 08/14] iommu/amd: Store the nid in io_pgtable_cfg instead of the domain
2024-08-21 17:37 [PATCH 00/14] Minor fixups and refactorings for AMD's io-pgtable code Jason Gunthorpe
` (6 preceding siblings ...)
2024-08-21 17:37 ` [PATCH 07/14] iommu/amd: Remove amd_io_pgtable::pgtbl_cfg Jason Gunthorpe
@ 2024-08-21 17:37 ` Jason Gunthorpe
2024-08-28 6:39 ` Vasant Hegde
2024-08-21 17:37 ` [PATCH 09/14] iommu/amd: Narrow the use of struct protection_domain to invalidation Jason Gunthorpe
` (5 subsequent siblings)
13 siblings, 1 reply; 32+ messages in thread
From: Jason Gunthorpe @ 2024-08-21 17:37 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Suravee Suthikulpanit,
Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches,
Vasant Hegde
We already have memory in the union here that is being wasted in AMD's
case; use it to store the nid.
Putting the nid here further isolates the io_pgtable code from the struct
protection_domain.
Fix up protection_domain_alloc() so that the NID from the device is
provided. At this point dev is never NULL for AMD, so this will now
allocate the first table pointer on the correct NUMA node.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/amd/amd_iommu.h | 2 +-
drivers/iommu/amd/amd_iommu_types.h | 1 -
drivers/iommu/amd/io_pgtable.c | 7 ++++---
drivers/iommu/amd/io_pgtable_v2.c | 5 ++---
drivers/iommu/amd/iommu.c | 11 ++++++-----
drivers/iommu/amd/pasid.c | 2 +-
include/linux/io-pgtable.h | 4 ++++
7 files changed, 18 insertions(+), 14 deletions(-)
diff --git a/drivers/iommu/amd/amd_iommu.h b/drivers/iommu/amd/amd_iommu.h
index 5a050080d2e814..5459f726fb29d6 100644
--- a/drivers/iommu/amd/amd_iommu.h
+++ b/drivers/iommu/amd/amd_iommu.h
@@ -45,7 +45,7 @@ extern enum io_pgtable_fmt amd_iommu_pgtable;
extern int amd_iommu_gpt_level;
/* Protection domain ops */
-struct protection_domain *protection_domain_alloc(unsigned int type);
+struct protection_domain *protection_domain_alloc(unsigned int type, int nid);
void protection_domain_free(struct protection_domain *domain);
struct iommu_domain *amd_iommu_domain_alloc_sva(struct device *dev,
struct mm_struct *mm);
diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
index ef4c4887cbbbd5..74dc003f5b7815 100644
--- a/drivers/iommu/amd/amd_iommu_types.h
+++ b/drivers/iommu/amd/amd_iommu_types.h
@@ -579,7 +579,6 @@ struct protection_domain {
struct amd_io_pgtable iop;
spinlock_t lock; /* mostly used to lock the page table*/
u16 id; /* the domain id written to the device table */
- int nid; /* Node ID */
enum protection_domain_mode pd_mode; /* Track page table type */
bool dirty_tracking; /* dirty tracking is enabled in the domain */
unsigned dev_cnt; /* devices assigned to this domain */
diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
index dab1cf53b1f3ff..d56be842c0b71e 100644
--- a/drivers/iommu/amd/io_pgtable.c
+++ b/drivers/iommu/amd/io_pgtable.c
@@ -145,7 +145,7 @@ static bool increase_address_space(struct protection_domain *domain,
bool ret = true;
u64 *pte;
- pte = iommu_alloc_page_node(domain->nid, gfp);
+ pte = iommu_alloc_page_node(domain->iop.pgtbl.cfg.amd.nid, gfp);
if (!pte)
return false;
@@ -233,7 +233,8 @@ static u64 *alloc_pte(struct protection_domain *domain,
if (!IOMMU_PTE_PRESENT(__pte) ||
pte_level == PAGE_MODE_NONE) {
- page = iommu_alloc_page_node(domain->nid, gfp);
+ page = iommu_alloc_page_node(
+ domain->iop.pgtbl.cfg.amd.nid, gfp);
if (!page)
return NULL;
@@ -560,7 +561,7 @@ static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *coo
{
struct amd_io_pgtable *pgtable = io_pgtable_cfg_to_data(cfg);
- pgtable->root = iommu_alloc_page(GFP_KERNEL);
+ pgtable->root = iommu_alloc_page_node(cfg->amd.nid, GFP_KERNEL);
if (!pgtable->root)
return NULL;
pgtable->mode = PAGE_MODE_3_LEVEL;
diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
index 45a6bc3326397f..1e3be8c5312b87 100644
--- a/drivers/iommu/amd/io_pgtable_v2.c
+++ b/drivers/iommu/amd/io_pgtable_v2.c
@@ -251,7 +251,7 @@ static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
while (mapped_size < size) {
map_size = get_alloc_page_size(pgsize);
- pte = v2_alloc_pte(pdom->nid, pdom->iop.pgd,
+ pte = v2_alloc_pte(cfg->amd.nid, pdom->iop.pgd,
iova, map_size, gfp, &updated);
if (!pte) {
ret = -EINVAL;
@@ -359,10 +359,9 @@ static void v2_free_pgtable(struct io_pgtable *iop)
static struct io_pgtable *v2_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
{
struct amd_io_pgtable *pgtable = io_pgtable_cfg_to_data(cfg);
- struct protection_domain *pdom = (struct protection_domain *)cookie;
int ias = IOMMU_IN_ADDR_BIT_SIZE;
- pgtable->pgd = iommu_alloc_page_node(pdom->nid, GFP_KERNEL);
+ pgtable->pgd = iommu_alloc_page_node(cfg->amd.nid, GFP_KERNEL);
if (!pgtable->pgd)
return NULL;
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 3840b3ede65d32..1cdd1675a01af2 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2037,8 +2037,8 @@ static int do_attach(struct iommu_dev_data *dev_data,
list_add(&dev_data->list, &domain->dev_list);
/* Update NUMA Node ID */
- if (domain->nid == NUMA_NO_NODE)
- domain->nid = dev_to_node(dev_data->dev);
+ if (domain->iop.pgtbl.cfg.amd.nid == NUMA_NO_NODE)
+ domain->iop.pgtbl.cfg.amd.nid = dev_to_node(dev_data->dev);
/* Do reference counting */
domain->dev_iommu[iommu->index] += 1;
@@ -2271,7 +2271,7 @@ void protection_domain_free(struct protection_domain *domain)
kfree(domain);
}
-struct protection_domain *protection_domain_alloc(unsigned int type)
+struct protection_domain *protection_domain_alloc(unsigned int type, int nid)
{
struct io_pgtable_ops *pgtbl_ops;
struct protection_domain *domain;
@@ -2288,7 +2288,7 @@ struct protection_domain *protection_domain_alloc(unsigned int type)
spin_lock_init(&domain->lock);
INIT_LIST_HEAD(&domain->dev_list);
INIT_LIST_HEAD(&domain->dev_data_list);
- domain->nid = NUMA_NO_NODE;
+ domain->iop.pgtbl.cfg.amd.nid = nid;
switch (type) {
/* No need to allocate io pgtable ops in passthrough mode */
@@ -2365,7 +2365,8 @@ static struct iommu_domain *do_iommu_domain_alloc(unsigned int type,
if (dirty_tracking && !amd_iommu_hd_support(iommu))
return ERR_PTR(-EOPNOTSUPP);
- domain = protection_domain_alloc(type);
+ domain = protection_domain_alloc(type,
+ dev ? dev_to_node(dev) : NUMA_NO_NODE);
if (!domain)
return ERR_PTR(-ENOMEM);
diff --git a/drivers/iommu/amd/pasid.c b/drivers/iommu/amd/pasid.c
index a68215f2b3e1d6..0657b9373be547 100644
--- a/drivers/iommu/amd/pasid.c
+++ b/drivers/iommu/amd/pasid.c
@@ -181,7 +181,7 @@ struct iommu_domain *amd_iommu_domain_alloc_sva(struct device *dev,
struct protection_domain *pdom;
int ret;
- pdom = protection_domain_alloc(IOMMU_DOMAIN_SVA);
+ pdom = protection_domain_alloc(IOMMU_DOMAIN_SVA, dev_to_node(dev));
if (!pdom)
return ERR_PTR(-ENOMEM);
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index f9a81761bfceda..b1ecfc3cd5bcc0 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -171,6 +171,10 @@ struct io_pgtable_cfg {
u64 ttbr[4];
u32 n_ttbrs;
} apple_dart_cfg;
+
+ struct {
+ int nid;
+ } amd;
};
};
--
2.46.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 09/14] iommu/amd: Narrow the use of struct protection_domain to invalidation
2024-08-21 17:37 [PATCH 00/14] Minor fixups and refactorings for AMD's io-pgtable code Jason Gunthorpe
` (7 preceding siblings ...)
2024-08-21 17:37 ` [PATCH 08/14] iommu/amd: Store the nid in io_pgtable_cfg instead of the domain Jason Gunthorpe
@ 2024-08-21 17:37 ` Jason Gunthorpe
2024-08-21 17:37 ` [PATCH 10/14] iommu/amd: Remove conditions from domain free paths Jason Gunthorpe
` (4 subsequent siblings)
13 siblings, 0 replies; 32+ messages in thread
From: Jason Gunthorpe @ 2024-08-21 17:37 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Suravee Suthikulpanit,
Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches,
Vasant Hegde
The AMD io_pgtable code doesn't implement the tlb ops callbacks; instead
it invokes the invalidation ops directly on the struct protection_domain.
Narrow the use of struct protection_domain to only those few code paths.
Make everything properly use struct amd_io_pgtable, which is the correct
modular type for this module.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/amd/io_pgtable.c | 35 +++++++++++++++++--------------
drivers/iommu/amd/io_pgtable_v2.c | 11 ++++++----
2 files changed, 26 insertions(+), 20 deletions(-)
diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
index d56be842c0b71e..cb8f23e878028a 100644
--- a/drivers/iommu/amd/io_pgtable.c
+++ b/drivers/iommu/amd/io_pgtable.c
@@ -137,31 +137,33 @@ static void free_sub_pt(u64 *root, int mode, struct list_head *freelist)
* another level increases the size of the address space by 9 bits to a size up
* to 64 bits.
*/
-static bool increase_address_space(struct protection_domain *domain,
+static bool increase_address_space(struct amd_io_pgtable *pgtable,
unsigned long address,
gfp_t gfp)
{
+ struct protection_domain *domain =
+ container_of(pgtable, struct protection_domain, iop);
unsigned long flags;
bool ret = true;
u64 *pte;
- pte = iommu_alloc_page_node(domain->iop.pgtbl.cfg.amd.nid, gfp);
+ pte = iommu_alloc_page_node(pgtable->pgtbl.cfg.amd.nid, gfp);
if (!pte)
return false;
spin_lock_irqsave(&domain->lock, flags);
- if (address <= PM_LEVEL_SIZE(domain->iop.mode))
+ if (address <= PM_LEVEL_SIZE(pgtable->mode))
goto out;
ret = false;
- if (WARN_ON_ONCE(domain->iop.mode == PAGE_MODE_6_LEVEL))
+ if (WARN_ON_ONCE(pgtable->mode == PAGE_MODE_6_LEVEL))
goto out;
- *pte = PM_LEVEL_PDE(domain->iop.mode, iommu_virt_to_phys(domain->iop.root));
+ *pte = PM_LEVEL_PDE(pgtable->mode, iommu_virt_to_phys(pgtable->root));
- domain->iop.root = pte;
- domain->iop.mode += 1;
+ pgtable->root = pte;
+ pgtable->mode += 1;
amd_iommu_update_and_flush_device_table(domain);
amd_iommu_domain_flush_complete(domain);
@@ -175,7 +177,7 @@ static bool increase_address_space(struct protection_domain *domain,
return ret;
}
-static u64 *alloc_pte(struct protection_domain *domain,
+static u64 *alloc_pte(struct amd_io_pgtable *pgtable,
unsigned long address,
unsigned long page_size,
u64 **pte_page,
@@ -187,18 +189,18 @@ static u64 *alloc_pte(struct protection_domain *domain,
BUG_ON(!is_power_of_2(page_size));
- while (address > PM_LEVEL_SIZE(domain->iop.mode)) {
+ while (address > PM_LEVEL_SIZE(pgtable->mode)) {
/*
* Return an error if there is no memory to update the
* page-table.
*/
- if (!increase_address_space(domain, address, gfp))
+ if (!increase_address_space(pgtable, address, gfp))
return NULL;
}
- level = domain->iop.mode - 1;
- pte = &domain->iop.root[PM_LEVEL_INDEX(level, address)];
+ level = pgtable->mode - 1;
+ pte = &pgtable->root[PM_LEVEL_INDEX(level, address)];
address = PAGE_SIZE_ALIGN(address, page_size);
end_lvl = PAGE_SIZE_LEVEL(page_size);
@@ -233,8 +235,8 @@ static u64 *alloc_pte(struct protection_domain *domain,
if (!IOMMU_PTE_PRESENT(__pte) ||
pte_level == PAGE_MODE_NONE) {
- page = iommu_alloc_page_node(
- domain->iop.pgtbl.cfg.amd.nid, gfp);
+ page = iommu_alloc_page_node(pgtable->pgtbl.cfg.amd.nid,
+ gfp);
if (!page)
return NULL;
@@ -348,7 +350,7 @@ static int iommu_v1_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
phys_addr_t paddr, size_t pgsize, size_t pgcount,
int prot, gfp_t gfp, size_t *mapped)
{
- struct protection_domain *dom = io_pgtable_ops_to_domain(ops);
+ struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
LIST_HEAD(freelist);
bool updated = false;
u64 __pte, *pte;
@@ -365,7 +367,7 @@ static int iommu_v1_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
while (pgcount > 0) {
count = PAGE_SIZE_PTE_COUNT(pgsize);
- pte = alloc_pte(dom, iova, pgsize, NULL, gfp, &updated);
+ pte = alloc_pte(pgtable, iova, pgsize, NULL, gfp, &updated);
ret = -ENOMEM;
if (!pte)
@@ -402,6 +404,7 @@ static int iommu_v1_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
out:
if (updated) {
+ struct protection_domain *dom = io_pgtable_ops_to_domain(ops);
unsigned long flags;
spin_lock_irqsave(&dom->lock, flags);
diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
index 1e3be8c5312b87..ed2c1faae6d580 100644
--- a/drivers/iommu/amd/io_pgtable_v2.c
+++ b/drivers/iommu/amd/io_pgtable_v2.c
@@ -233,8 +233,8 @@ static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
phys_addr_t paddr, size_t pgsize, size_t pgcount,
int prot, gfp_t gfp, size_t *mapped)
{
- struct protection_domain *pdom = io_pgtable_ops_to_domain(ops);
- struct io_pgtable_cfg *cfg = &pdom->iop.pgtbl.cfg;
+ struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
+ struct io_pgtable_cfg *cfg = &pgtable->pgtbl.cfg;
u64 *pte;
unsigned long map_size;
unsigned long mapped_size = 0;
@@ -251,7 +251,7 @@ static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
while (mapped_size < size) {
map_size = get_alloc_page_size(pgsize);
- pte = v2_alloc_pte(cfg->amd.nid, pdom->iop.pgd,
+ pte = v2_alloc_pte(cfg->amd.nid, pgtable->pgd,
iova, map_size, gfp, &updated);
if (!pte) {
ret = -EINVAL;
@@ -266,8 +266,11 @@ static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
}
out:
- if (updated)
+ if (updated) {
+ struct protection_domain *pdom = io_pgtable_ops_to_domain(ops);
+
amd_iommu_domain_flush_pages(pdom, o_iova, size);
+ }
if (mapped)
*mapped += mapped_size;
--
2.46.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 10/14] iommu/amd: Remove conditions from domain free paths
2024-08-21 17:37 [PATCH 00/14] Minor fixups and refactorings for AMD's io-pgtable code Jason Gunthorpe
` (8 preceding siblings ...)
2024-08-21 17:37 ` [PATCH 09/14] iommu/amd: Narrow the use of struct protection_domain to invalidation Jason Gunthorpe
@ 2024-08-21 17:37 ` Jason Gunthorpe
2024-08-28 6:56 ` Vasant Hegde
2024-08-21 17:37 ` [PATCH 11/14] iommu/amd: Fix typo of , instead of ; Jason Gunthorpe
` (3 subsequent siblings)
13 siblings, 1 reply; 32+ messages in thread
From: Jason Gunthorpe @ 2024-08-21 17:37 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Suravee Suthikulpanit,
Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches,
Vasant Hegde
Don't use the tlb pointer as a flag to indicate whether
protection_domain_alloc() completed. Have protection_domain_alloc() unwind
itself in the normal kernel style and require that protection_domain_free()
only be called on successful results of protection_domain_alloc().
Also, the amd_iommu_domain_free() op is never called by the core code with
a NULL argument, so remove all the NULL tests as well.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/amd/iommu.c | 29 ++++++++++-------------------
1 file changed, 10 insertions(+), 19 deletions(-)
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 1cdd1675a01af2..1e62e880131940 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2257,17 +2257,9 @@ static void cleanup_domain(struct protection_domain *domain)
void protection_domain_free(struct protection_domain *domain)
{
- if (!domain)
- return;
-
WARN_ON(!list_empty(&domain->dev_list));
-
- if (domain->iop.pgtbl.cfg.tlb)
- free_io_pgtable_ops(&domain->iop.pgtbl.ops);
-
- if (domain->id)
- domain_id_free(domain->id);
-
+ free_io_pgtable_ops(&domain->iop.pgtbl.ops);
+ domain_id_free(domain->id);
kfree(domain);
}
@@ -2283,7 +2275,7 @@ struct protection_domain *protection_domain_alloc(unsigned int type, int nid)
domain->id = domain_id_alloc();
if (!domain->id)
- goto out_err;
+ goto err_free;
spin_lock_init(&domain->lock);
INIT_LIST_HEAD(&domain->dev_list);
@@ -2306,7 +2298,7 @@ struct protection_domain *protection_domain_alloc(unsigned int type, int nid)
pgtable = AMD_IOMMU_V1;
break;
default:
- goto out_err;
+ goto err_id;
}
switch (pgtable) {
@@ -2317,17 +2309,19 @@ struct protection_domain *protection_domain_alloc(unsigned int type, int nid)
domain->pd_mode = PD_MODE_V2;
break;
default:
- goto out_err;
+ goto err_id;
}
pgtbl_ops =
alloc_io_pgtable_ops(pgtable, &domain->iop.pgtbl.cfg, domain);
if (!pgtbl_ops)
- goto out_err;
+ goto err_id;
return domain;
-out_err:
- protection_domain_free(domain);
+err_id:
+ domain_id_free(domain->id);
+err_free:
+ kfree(domain);
return NULL;
}
@@ -2416,9 +2410,6 @@ void amd_iommu_domain_free(struct iommu_domain *dom)
struct protection_domain *domain;
unsigned long flags;
- if (!dom)
- return;
-
domain = to_pdomain(dom);
spin_lock_irqsave(&domain->lock, flags);
--
2.46.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 11/14] iommu/amd: Fix typo of , instead of ;
2024-08-21 17:37 [PATCH 00/14] Minor fixups and refactorings for AMD's io-pgtable code Jason Gunthorpe
` (9 preceding siblings ...)
2024-08-21 17:37 ` [PATCH 10/14] iommu/amd: Remove conditions from domain free paths Jason Gunthorpe
@ 2024-08-21 17:37 ` Jason Gunthorpe
2024-08-28 6:39 ` Vasant Hegde
2024-08-21 17:37 ` [PATCH 12/14] iommu/amd: Remove the confusing dummy iommu_flush_ops tlb ops Jason Gunthorpe
` (2 subsequent siblings)
13 siblings, 1 reply; 32+ messages in thread
From: Jason Gunthorpe @ 2024-08-21 17:37 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Suravee Suthikulpanit,
Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches,
Vasant Hegde
The comma operator generates the same code here, but it is not the expected C style.
Fixes: aaac38f61487 ("iommu/amd: Initial support for AMD IOMMU v2 page table")
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/amd/io_pgtable_v2.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
index ed2c1faae6d580..910fe1879f3e41 100644
--- a/drivers/iommu/amd/io_pgtable_v2.c
+++ b/drivers/iommu/amd/io_pgtable_v2.c
@@ -375,9 +375,9 @@ static struct io_pgtable *v2_alloc_pgtable(struct io_pgtable_cfg *cfg, void *coo
pgtable->pgtbl.ops.unmap_pages = iommu_v2_unmap_pages;
pgtable->pgtbl.ops.iova_to_phys = iommu_v2_iova_to_phys;
- cfg->pgsize_bitmap = AMD_IOMMU_PGSIZES_V2,
- cfg->ias = ias,
- cfg->oas = IOMMU_OUT_ADDR_BIT_SIZE,
+ cfg->pgsize_bitmap = AMD_IOMMU_PGSIZES_V2;
+ cfg->ias = ias;
+ cfg->oas = IOMMU_OUT_ADDR_BIT_SIZE;
cfg->tlb = &v2_flush_ops;
return &pgtable->pgtbl;
--
2.46.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 12/14] iommu/amd: Remove the confusing dummy iommu_flush_ops tlb ops
2024-08-21 17:37 [PATCH 00/14] Minor fixups and refactorings for AMD's io-pgtable code Jason Gunthorpe
` (10 preceding siblings ...)
2024-08-21 17:37 ` [PATCH 11/14] iommu/amd: Fix typo of , instead of ; Jason Gunthorpe
@ 2024-08-21 17:37 ` Jason Gunthorpe
2024-08-28 6:42 ` Vasant Hegde
2024-08-21 17:37 ` [PATCH 13/14] iommu/amd: Correct the reported page sizes from the V1 table Jason Gunthorpe
2024-08-21 17:37 ` [PATCH 14/14] iommu/amd: Do not set the D bit on AMD v2 table entries Jason Gunthorpe
13 siblings, 1 reply; 32+ messages in thread
From: Jason Gunthorpe @ 2024-08-21 17:37 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Suravee Suthikulpanit,
Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches,
Vasant Hegde
The iommu driver is supposed to provide these ops to its io_pgtable
implementation so that it can hook the invalidations and do the right
thing.
They are called by wrapper functions like io_pgtable_tlb_add_page() etc,
which the AMD code never calls.
Instead it directly calls the AMD IOMMU invalidation functions by casting
to the struct protection_domain. Remove it all.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/amd/io_pgtable.c | 22 ----------------------
drivers/iommu/amd/io_pgtable_v2.c | 22 ----------------------
2 files changed, 44 deletions(-)
diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
index cb8f23e878028a..aaf7858a0a421d 100644
--- a/drivers/iommu/amd/io_pgtable.c
+++ b/drivers/iommu/amd/io_pgtable.c
@@ -24,27 +24,6 @@
#include "amd_iommu.h"
#include "../iommu-pages.h"
-static void v1_tlb_flush_all(void *cookie)
-{
-}
-
-static void v1_tlb_flush_walk(unsigned long iova, size_t size,
- size_t granule, void *cookie)
-{
-}
-
-static void v1_tlb_add_page(struct iommu_iotlb_gather *gather,
- unsigned long iova, size_t granule,
- void *cookie)
-{
-}
-
-static const struct iommu_flush_ops v1_flush_ops = {
- .tlb_flush_all = v1_tlb_flush_all,
- .tlb_flush_walk = v1_tlb_flush_walk,
- .tlb_add_page = v1_tlb_add_page,
-};
-
/*
* Helper function to get the first pte of a large mapping
*/
@@ -572,7 +551,6 @@ static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *coo
cfg->pgsize_bitmap = AMD_IOMMU_PGSIZES;
cfg->ias = IOMMU_IN_ADDR_BIT_SIZE;
cfg->oas = IOMMU_OUT_ADDR_BIT_SIZE;
- cfg->tlb = &v1_flush_ops;
pgtable->pgtbl.ops.map_pages = iommu_v1_map_pages;
pgtable->pgtbl.ops.unmap_pages = iommu_v1_unmap_pages;
diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
index 910fe1879f3e41..77cc1b4a3f0225 100644
--- a/drivers/iommu/amd/io_pgtable_v2.c
+++ b/drivers/iommu/amd/io_pgtable_v2.c
@@ -326,27 +326,6 @@ static phys_addr_t iommu_v2_iova_to_phys(struct io_pgtable_ops *ops, unsigned lo
/*
* ----------------------------------------------------
*/
-static void v2_tlb_flush_all(void *cookie)
-{
-}
-
-static void v2_tlb_flush_walk(unsigned long iova, size_t size,
- size_t granule, void *cookie)
-{
-}
-
-static void v2_tlb_add_page(struct iommu_iotlb_gather *gather,
- unsigned long iova, size_t granule,
- void *cookie)
-{
-}
-
-static const struct iommu_flush_ops v2_flush_ops = {
- .tlb_flush_all = v2_tlb_flush_all,
- .tlb_flush_walk = v2_tlb_flush_walk,
- .tlb_add_page = v2_tlb_add_page,
-};
-
static void v2_free_pgtable(struct io_pgtable *iop)
{
struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, pgtbl);
@@ -378,7 +357,6 @@ static struct io_pgtable *v2_alloc_pgtable(struct io_pgtable_cfg *cfg, void *coo
cfg->pgsize_bitmap = AMD_IOMMU_PGSIZES_V2;
cfg->ias = ias;
cfg->oas = IOMMU_OUT_ADDR_BIT_SIZE;
- cfg->tlb = &v2_flush_ops;
return &pgtable->pgtbl;
}
--
2.46.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 13/14] iommu/amd: Correct the reported page sizes from the V1 table
2024-08-21 17:37 [PATCH 00/14] Minor fixups and refactorings for AMD's io-pgtable code Jason Gunthorpe
` (11 preceding siblings ...)
2024-08-21 17:37 ` [PATCH 12/14] iommu/amd: Remove the confusing dummy iommu_flush_ops tlb ops Jason Gunthorpe
@ 2024-08-21 17:37 ` Jason Gunthorpe
2024-08-28 13:50 ` Vasant Hegde
2024-08-21 17:37 ` [PATCH 14/14] iommu/amd: Do not set the D bit on AMD v2 table entries Jason Gunthorpe
13 siblings, 1 reply; 32+ messages in thread
From: Jason Gunthorpe @ 2024-08-21 17:37 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Suravee Suthikulpanit,
Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches,
Vasant Hegde
The HW only has 52 bits of physical address support, so the supported page
sizes should not have bits set beyond this. Further, the spec says that the
6th level does not support any "default page size for translation entries",
meaning leaves in the 6th level are not allowed either.
Rework the definition to use GENMASK to build the range of supported pages
from the top of physical to 4k.
Nothing ever uses such large pages, so this is a cosmetic/documentation
improvement only.
Reported-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/amd/amd_iommu_types.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
index 74dc003f5b7815..7eb546af8944da 100644
--- a/drivers/iommu/amd/amd_iommu_types.h
+++ b/drivers/iommu/amd/amd_iommu_types.h
@@ -294,8 +294,9 @@
* that we support.
*
* 512GB Pages are not supported due to a hardware bug
+ * Page sizes >= the 52 bit max physical address of the CPU are not supported.
*/
-#define AMD_IOMMU_PGSIZES ((~0xFFFUL) & ~(2ULL << 38))
+#define AMD_IOMMU_PGSIZES (GENMASK_ULL(51, 12) ^ SZ_512G)
/* 4K, 2MB, 1G page sizes are supported */
#define AMD_IOMMU_PGSIZES_V2 (PAGE_SIZE | (1ULL << 21) | (1ULL << 30))
--
2.46.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH 14/14] iommu/amd: Do not set the D bit on AMD v2 table entries
2024-08-21 17:37 [PATCH 00/14] Minor fixups and refactorings for AMD's io-pgtable code Jason Gunthorpe
` (12 preceding siblings ...)
2024-08-21 17:37 ` [PATCH 13/14] iommu/amd: Correct the reported page sizes from the V1 table Jason Gunthorpe
@ 2024-08-21 17:37 ` Jason Gunthorpe
2024-08-28 6:41 ` Vasant Hegde
13 siblings, 1 reply; 32+ messages in thread
From: Jason Gunthorpe @ 2024-08-21 17:37 UTC (permalink / raw)
To: iommu, Joerg Roedel, Robin Murphy, Suravee Suthikulpanit,
Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches,
Vasant Hegde
The manual says that bit 6 is IGN for all Page-Table Base Address
pointers, so don't set it.
Fixes: aaac38f61487 ("iommu/amd: Initial support for AMD IOMMU v2 page table")
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/iommu/amd/io_pgtable_v2.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
index 77cc1b4a3f0225..25b9042fa45307 100644
--- a/drivers/iommu/amd/io_pgtable_v2.c
+++ b/drivers/iommu/amd/io_pgtable_v2.c
@@ -51,7 +51,7 @@ static inline u64 set_pgtable_attr(u64 *page)
u64 prot;
prot = IOMMU_PAGE_PRESENT | IOMMU_PAGE_RW | IOMMU_PAGE_USER;
- prot |= IOMMU_PAGE_ACCESS | IOMMU_PAGE_DIRTY;
+ prot |= IOMMU_PAGE_ACCESS;
return (iommu_virt_to_phys(page) | prot);
}
--
2.46.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* Re: [PATCH 01/14] iommu/amd: Move allocation of the top table into v1_alloc_pgtable
2024-08-21 17:37 ` [PATCH 01/14] iommu/amd: Move allocation of the top table into v1_alloc_pgtable Jason Gunthorpe
@ 2024-08-23 9:26 ` Joerg Roedel
2024-08-23 12:14 ` Jason Gunthorpe
2024-08-28 6:22 ` Vasant Hegde
1 sibling, 1 reply; 32+ messages in thread
From: Joerg Roedel @ 2024-08-23 9:26 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, Robin Murphy, Suravee Suthikulpanit, Will Deacon,
Alejandro Jimenez, Joao Martins, Joerg Roedel, patches,
Vasant Hegde
On Wed, Aug 21, 2024 at 02:37:07PM -0300, Jason Gunthorpe wrote:
> All the page table memory should be allocated/free within the io_pgtable
> struct. The v2 path is already doing this, make it consistent.
>
> It is hard to see but the free of the root in protection_domain_free() is
> a NOP on the success path because v1_free_pgtable() does
> amd_iommu_domain_clr_pt_root().
>
> The root memory is already freed because free_sub_pt() put it on the
> freelist. The free path in protection_domain_free() is only used during
> error unwind of protection_domain_alloc().
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
> drivers/iommu/amd/io_pgtable.c | 8 ++++++--
> drivers/iommu/amd/iommu.c | 21 ++-------------------
> 2 files changed, 8 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
> index 1074ee25064d06..05aed3cb46f1bf 100644
> --- a/drivers/iommu/amd/io_pgtable.c
> +++ b/drivers/iommu/amd/io_pgtable.c
> @@ -574,20 +574,24 @@ static void v1_free_pgtable(struct io_pgtable *iop)
> pgtable->mode > PAGE_MODE_6_LEVEL);
>
> free_sub_pt(pgtable->root, pgtable->mode, &freelist);
> + iommu_put_pages_list(&freelist);
You are freeing pages here before the IOMMU and device TLBs are flushed,
no?
>
> /* Update data structure */
> amd_iommu_domain_clr_pt_root(dom);
>
> /* Make changes visible to IOMMUs */
> amd_iommu_domain_update(dom);
> -
> - iommu_put_pages_list(&freelist);
> }
Regards,
Joerg
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH 01/14] iommu/amd: Move allocation of the top table into v1_alloc_pgtable
2024-08-23 9:26 ` Joerg Roedel
@ 2024-08-23 12:14 ` Jason Gunthorpe
0 siblings, 0 replies; 32+ messages in thread
From: Jason Gunthorpe @ 2024-08-23 12:14 UTC (permalink / raw)
To: Joerg Roedel
Cc: iommu, Robin Murphy, Suravee Suthikulpanit, Will Deacon,
Alejandro Jimenez, Joao Martins, Joerg Roedel, patches,
Vasant Hegde
On Fri, Aug 23, 2024 at 11:26:17AM +0200, Joerg Roedel wrote:
> On Wed, Aug 21, 2024 at 02:37:07PM -0300, Jason Gunthorpe wrote:
> > All the page table memory should be allocated/free within the io_pgtable
> > struct. The v2 path is already doing this, make it consistent.
> >
> > It is hard to see but the free of the root in protection_domain_free() is
> > a NOP on the success path because v1_free_pgtable() does
> > amd_iommu_domain_clr_pt_root().
> >
> > The root memory is already freed because free_sub_pt() put it on the
> > freelist. The free path in protection_domain_free() is only used during
> > error unwind of protection_domain_alloc().
> >
> > Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> > ---
> > drivers/iommu/amd/io_pgtable.c | 8 ++++++--
> > drivers/iommu/amd/iommu.c | 21 ++-------------------
> > 2 files changed, 8 insertions(+), 21 deletions(-)
> >
> > diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
> > index 1074ee25064d06..05aed3cb46f1bf 100644
> > --- a/drivers/iommu/amd/io_pgtable.c
> > +++ b/drivers/iommu/amd/io_pgtable.c
> > @@ -574,20 +574,24 @@ static void v1_free_pgtable(struct io_pgtable *iop)
> > pgtable->mode > PAGE_MODE_6_LEVEL);
> >
> > free_sub_pt(pgtable->root, pgtable->mode, &freelist);
> > + iommu_put_pages_list(&freelist);
>
> You are freeing pages here before the IOMMU and device TLBs are flushed,
> no?
No see
[PATCH 04/14] iommu/amd: Remove amd_iommu_domain_update() from page table freeing
This is on the domain free path. It is required that the domain not be
attached to anything. All the TLBs were flushed during the detatch
operations already.
Jason
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH 02/14] iommu/amd: Allocate the page table root using GFP_KERNEL
2024-08-21 17:37 ` [PATCH 02/14] iommu/amd: Allocate the page table root using GFP_KERNEL Jason Gunthorpe
@ 2024-08-27 14:38 ` Vasant Hegde
0 siblings, 0 replies; 32+ messages in thread
From: Vasant Hegde @ 2024-08-27 14:38 UTC (permalink / raw)
To: Jason Gunthorpe, iommu, Joerg Roedel, Robin Murphy,
Suravee Suthikulpanit, Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches
On 8/21/2024 11:07 PM, Jason Gunthorpe wrote:
> Domain allocation is always done under a sleepable context, the v1 path
> and other drivers use GFP_KERNEL already. Fix the v2 path to also use
> GFP_KERNEL.
>
> Fixes: 0d571dcbe7c6 ("iommu/amd: Allocate page table using numa locality info")
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
-Vasant
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH 01/14] iommu/amd: Move allocation of the top table into v1_alloc_pgtable
2024-08-21 17:37 ` [PATCH 01/14] iommu/amd: Move allocation of the top table into v1_alloc_pgtable Jason Gunthorpe
2024-08-23 9:26 ` Joerg Roedel
@ 2024-08-28 6:22 ` Vasant Hegde
1 sibling, 0 replies; 32+ messages in thread
From: Vasant Hegde @ 2024-08-28 6:22 UTC (permalink / raw)
To: Jason Gunthorpe, iommu, Joerg Roedel, Robin Murphy,
Suravee Suthikulpanit, Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches
On 8/21/2024 11:07 PM, Jason Gunthorpe wrote:
> All the page table memory should be allocated/free within the io_pgtable
> struct. The v2 path is already doing this, make it consistent.
>
> It is hard to see but the free of the root in protection_domain_free() is
> a NOP on the success path because v1_free_pgtable() does
> amd_iommu_domain_clr_pt_root().
>
> The root memory is already freed because free_sub_pt() put it on the
> freelist. The free path in protection_domain_free() is only used during
> error unwind of protection_domain_alloc().
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
-Vasant
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH 03/14] iommu/amd: Set the pgsize_bitmap correctly
2024-08-21 17:37 ` [PATCH 03/14] iommu/amd: Set the pgsize_bitmap correctly Jason Gunthorpe
@ 2024-08-28 6:26 ` Vasant Hegde
0 siblings, 0 replies; 32+ messages in thread
From: Vasant Hegde @ 2024-08-28 6:26 UTC (permalink / raw)
To: Jason Gunthorpe, iommu, Joerg Roedel, Robin Murphy,
Suravee Suthikulpanit, Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches
On 8/21/2024 11:07 PM, Jason Gunthorpe wrote:
> When using io_pgtable the correct pgsize_bitmap is stored in the cfg, both
> v1_alloc_pgtable() and v2_alloc_pgtable() set it correctly.
>
> This fixes a bug where the v2 pgtable had the wrong pgsize as
> protection_domain_init_v2() would set it and then do_iommu_domain_alloc()
> immediately resets it.
>
> Remove the confusing ops.pgsize_bitmap since that is not used if the
> driver sets domain.pgsize_bitmap.
>
> Fixes: 134288158a41 ("iommu/amd: Add domain_alloc_user based domain allocation")
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
-Vasant
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH 04/14] iommu/amd: Remove amd_iommu_domain_update() from page table freeing
2024-08-21 17:37 ` [PATCH 04/14] iommu/amd: Remove amd_iommu_domain_update() from page table freeing Jason Gunthorpe
@ 2024-08-28 6:26 ` Vasant Hegde
0 siblings, 0 replies; 32+ messages in thread
From: Vasant Hegde @ 2024-08-28 6:26 UTC (permalink / raw)
To: Jason Gunthorpe, iommu, Joerg Roedel, Robin Murphy,
Suravee Suthikulpanit, Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches
On 8/21/2024 11:07 PM, Jason Gunthorpe wrote:
> It is a serious bug if the domain is still mapped to any DTEs when it is
> freed as we immediately start freeing page table memory, so any remaining
> HW touch will UAF.
>
> If it is not mapped then dev_list is empty and amd_iommu_domain_update()
> does nothing.
>
> Remove it and add a WARN_ON() to catch this class of bug.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
-Vasant
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH 05/14] iommu/amd: Remove the amd_iommu_domain_set_pt_root() and related
2024-08-21 17:37 ` [PATCH 05/14] iommu/amd: Remove the amd_iommu_domain_set_pt_root() and related Jason Gunthorpe
@ 2024-08-28 6:27 ` Vasant Hegde
0 siblings, 0 replies; 32+ messages in thread
From: Vasant Hegde @ 2024-08-28 6:27 UTC (permalink / raw)
To: Jason Gunthorpe, iommu, Joerg Roedel, Robin Murphy,
Suravee Suthikulpanit, Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches
On 8/21/2024 11:07 PM, Jason Gunthorpe wrote:
> Looks like many refactorings here have left this confused. There is only
> one storage of the root/mode, it is in the iop struct.
>
> increase_address_space() calls amd_iommu_domain_set_pgtable() with values
> that it already stored in iop a few lines above.
>
> amd_iommu_domain_clr_pt_root() is zero'ing memory we are about to free. It
> used to protect agains a double free of root, but that is gone now.
s/agains/against/
>
> Remove amd_iommu_domain_set_pgtable(), amd_iommu_domain_set_pt_root(),
> amd_iommu_domain_clr_pt_root() as they are all pointless.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
-Vasant
* Re: [PATCH 06/14] iommu/amd: Rename struct amd_io_pgtable iopt to pgtbl
2024-08-21 17:37 ` [PATCH 06/14] iommu/amd: Rename struct amd_io_pgtable iopt to pgtbl Jason Gunthorpe
@ 2024-08-28 6:28 ` Vasant Hegde
0 siblings, 0 replies; 32+ messages in thread
From: Vasant Hegde @ 2024-08-28 6:28 UTC (permalink / raw)
To: Jason Gunthorpe, iommu, Joerg Roedel, Robin Murphy,
Suravee Suthikulpanit, Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches
On 8/21/2024 11:07 PM, Jason Gunthorpe wrote:
> There is struct protection_domain iopt and struct amd_io_pgtable iopt.
> Next patches are going to want to write domain.iopt.iopt.xx which is quite
> unnatural to read.
>
> Give one of them a different name, amd_io_pgtable has fewer references so
> call it pgtbl, to match pgtbl_cfg, instead.
>
> Suggested-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
-Vasant
* Re: [PATCH 07/14] iommu/amd: Remove amd_io_pgtable::pgtbl_cfg
2024-08-21 17:37 ` [PATCH 07/14] iommu/amd: Remove amd_io_pgtable::pgtbl_cfg Jason Gunthorpe
@ 2024-08-28 6:37 ` Vasant Hegde
0 siblings, 0 replies; 32+ messages in thread
From: Vasant Hegde @ 2024-08-28 6:37 UTC (permalink / raw)
To: Jason Gunthorpe, iommu, Joerg Roedel, Robin Murphy,
Suravee Suthikulpanit, Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches
On 8/21/2024 11:07 PM, Jason Gunthorpe wrote:
> This struct is already in iop.cfg, we don't need two.
>
> AMD is using this API sort of wrong, the cfg is supposed to be passed in
> and then the allocation function will allocate ops memory and copy the
> passed config into the new memory. Keep it kind of wrong and pass in the
> cfg memory that is already part of the pagetable struct.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
-Vasant
* Re: [PATCH 08/14] iommu/amd: Store the nid in io_pgtable_cfg instead of the domain
2024-08-21 17:37 ` [PATCH 08/14] iommu/amd: Store the nid in io_pgtable_cfg instead of the domain Jason Gunthorpe
@ 2024-08-28 6:39 ` Vasant Hegde
2024-08-28 18:13 ` Jason Gunthorpe
0 siblings, 1 reply; 32+ messages in thread
From: Vasant Hegde @ 2024-08-28 6:39 UTC (permalink / raw)
To: Jason Gunthorpe, iommu, Joerg Roedel, Robin Murphy,
Suravee Suthikulpanit, Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches
Jason,
On 8/21/2024 11:07 PM, Jason Gunthorpe wrote:
> We already have memory in the union here that is being wasted in AMD's
> case, use it to store the nid.
>
> Putting the nid here further isolates the io_pgtable code from the struct
> protection_domain.
>
> Fixup protection_domain_alloc so that the NID from the device is provided,
> at this point dev is never NULL for AMD so this will now allocate the
> first table pointer on the correct NUMA node.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
> drivers/iommu/amd/amd_iommu.h | 2 +-
> drivers/iommu/amd/amd_iommu_types.h | 1 -
> drivers/iommu/amd/io_pgtable.c | 7 ++++---
> drivers/iommu/amd/io_pgtable_v2.c | 5 ++---
> drivers/iommu/amd/iommu.c | 11 ++++++-----
> drivers/iommu/amd/pasid.c | 2 +-
> include/linux/io-pgtable.h | 4 ++++
> 7 files changed, 18 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/iommu/amd/amd_iommu.h b/drivers/iommu/amd/amd_iommu.h
> index 5a050080d2e814..5459f726fb29d6 100644
> --- a/drivers/iommu/amd/amd_iommu.h
> +++ b/drivers/iommu/amd/amd_iommu.h
> @@ -45,7 +45,7 @@ extern enum io_pgtable_fmt amd_iommu_pgtable;
> extern int amd_iommu_gpt_level;
>
> /* Protection domain ops */
> -struct protection_domain *protection_domain_alloc(unsigned int type);
> +struct protection_domain *protection_domain_alloc(unsigned int type, int nid);
> void protection_domain_free(struct protection_domain *domain);
> struct iommu_domain *amd_iommu_domain_alloc_sva(struct device *dev,
> struct mm_struct *mm);
> diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
> index ef4c4887cbbbd5..74dc003f5b7815 100644
> --- a/drivers/iommu/amd/amd_iommu_types.h
> +++ b/drivers/iommu/amd/amd_iommu_types.h
> @@ -579,7 +579,6 @@ struct protection_domain {
> struct amd_io_pgtable iop;
> spinlock_t lock; /* mostly used to lock the page table*/
> u16 id; /* the domain id written to the device table */
> - int nid; /* Node ID */
> enum protection_domain_mode pd_mode; /* Track page table type */
> bool dirty_tracking; /* dirty tracking is enabled in the domain */
> unsigned dev_cnt; /* devices assigned to this domain */
> diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
> index dab1cf53b1f3ff..d56be842c0b71e 100644
> --- a/drivers/iommu/amd/io_pgtable.c
> +++ b/drivers/iommu/amd/io_pgtable.c
> @@ -145,7 +145,7 @@ static bool increase_address_space(struct protection_domain *domain,
> bool ret = true;
> u64 *pte;
>
> - pte = iommu_alloc_page_node(domain->nid, gfp);
> + pte = iommu_alloc_page_node(domain->iop.pgtbl.cfg.amd.nid, gfp);
Too many levels, and it is used in multiple places. IMO it's worth having a helper
function to set/get the nid.
-Vasant
* Re: [PATCH 11/14] iommu/amd: Fix typo of , instead of ;
2024-08-21 17:37 ` [PATCH 11/14] iommu/amd: Fix typo of , instead of ; Jason Gunthorpe
@ 2024-08-28 6:39 ` Vasant Hegde
0 siblings, 0 replies; 32+ messages in thread
From: Vasant Hegde @ 2024-08-28 6:39 UTC (permalink / raw)
To: Jason Gunthorpe, iommu, Joerg Roedel, Robin Murphy,
Suravee Suthikulpanit, Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches
On 8/21/2024 11:07 PM, Jason Gunthorpe wrote:
> Generates the same code, but is not the expected C style.
>
> Fixes: aaac38f61487 ("iommu/amd: Initial support for AMD IOMMU v2 page table")
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
-Vasant
* Re: [PATCH 14/14] iommu/amd: Do not set the D bit on AMD v2 table entries
2024-08-21 17:37 ` [PATCH 14/14] iommu/amd: Do not set the D bit on AMD v2 table entries Jason Gunthorpe
@ 2024-08-28 6:41 ` Vasant Hegde
0 siblings, 0 replies; 32+ messages in thread
From: Vasant Hegde @ 2024-08-28 6:41 UTC (permalink / raw)
To: Jason Gunthorpe, iommu, Joerg Roedel, Robin Murphy,
Suravee Suthikulpanit, Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches
On 8/21/2024 11:07 PM, Jason Gunthorpe wrote:
> The manual says that bit 6 is IGN for all Page-Table Base Address
> pointers, don't set it.
>
> Fixes: aaac38f61487 ("iommu/amd: Initial support for AMD IOMMU v2 page table")
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
-Vasant
* Re: [PATCH 12/14] iommu/amd: Remove the confusing dummy iommu_flush_ops tlb ops
2024-08-21 17:37 ` [PATCH 12/14] iommu/amd: Remove the confusing dummy iommu_flush_ops tlb ops Jason Gunthorpe
@ 2024-08-28 6:42 ` Vasant Hegde
0 siblings, 0 replies; 32+ messages in thread
From: Vasant Hegde @ 2024-08-28 6:42 UTC (permalink / raw)
To: Jason Gunthorpe, iommu, Joerg Roedel, Robin Murphy,
Suravee Suthikulpanit, Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches
On 8/21/2024 11:07 PM, Jason Gunthorpe wrote:
> The iommu driver is supposed to provide these ops to its io_pgtable
> implementation so that it can hook the invalidations and do the right
> thing.
>
> They are called by wrapper functions like io_pgtable_tlb_add_page() etc,
> which the AMD code never calls.
>
> Instead it directly calls the AMD IOMMU invalidation functions by casting
> to the struct protection_domain. Remove it all.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
-Vasant
* Re: [PATCH 10/14] iommu/amd: Remove conditions from domain free paths
2024-08-21 17:37 ` [PATCH 10/14] iommu/amd: Remove conditions from domain free paths Jason Gunthorpe
@ 2024-08-28 6:56 ` Vasant Hegde
0 siblings, 0 replies; 32+ messages in thread
From: Vasant Hegde @ 2024-08-28 6:56 UTC (permalink / raw)
To: Jason Gunthorpe, iommu, Joerg Roedel, Robin Murphy,
Suravee Suthikulpanit, Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches
On 8/21/2024 11:07 PM, Jason Gunthorpe wrote:
> Don't use tlb as some flag to indicate if protection_domain_alloc()
> completed. Have protection_domain_alloc() unwind itself in the normal
> kernel style and require protection_domain_free() only be called on
> successful results of protection_domain_alloc().
>
> Also, the amd_iommu_domain_free() op is never called by the core code with
> a NULL argument, so remove all the NULL tests as well.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
-Vasant
* Re: [PATCH 13/14] iommu/amd: Correct the reported page sizes from the V1 table
2024-08-21 17:37 ` [PATCH 13/14] iommu/amd: Correct the reported page sizes from the V1 table Jason Gunthorpe
@ 2024-08-28 13:50 ` Vasant Hegde
0 siblings, 0 replies; 32+ messages in thread
From: Vasant Hegde @ 2024-08-28 13:50 UTC (permalink / raw)
To: Jason Gunthorpe, iommu, Joerg Roedel, Robin Murphy,
Suravee Suthikulpanit, Will Deacon
Cc: Alejandro Jimenez, Joao Martins, Joerg Roedel, patches
On 8/21/2024 11:07 PM, Jason Gunthorpe wrote:
> The HW only has 52 bits of physical address support, the supported page
> sizes should not have bits set beyond this. Further the spec says that the
> 6th level does not support any "default page size for translation entries"
> meaning leaves in the 6th level are not allowed either.
>
> Rework the definition to use GENMASK to build the range of supported pages
> from the top of physical to 4k.
>
> Nothing ever uses such large pages, so this is a cosmetic/documentation
> improvement only.
>
> Reported-by: Joao Martins <joao.m.martins@oracle.com>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
-Vasant
* Re: [PATCH 08/14] iommu/amd: Store the nid in io_pgtable_cfg instead of the domain
2024-08-28 6:39 ` Vasant Hegde
@ 2024-08-28 18:13 ` Jason Gunthorpe
2024-08-29 10:47 ` Vasant Hegde
0 siblings, 1 reply; 32+ messages in thread
From: Jason Gunthorpe @ 2024-08-28 18:13 UTC (permalink / raw)
To: Vasant Hegde
Cc: iommu, Joerg Roedel, Robin Murphy, Suravee Suthikulpanit,
Will Deacon, Alejandro Jimenez, Joao Martins, Joerg Roedel,
patches
On Wed, Aug 28, 2024 at 12:09:01PM +0530, Vasant Hegde wrote:
> > @@ -145,7 +145,7 @@ static bool increase_address_space(struct protection_domain *domain,
> > bool ret = true;
> > u64 *pte;
> >
> > - pte = iommu_alloc_page_node(domain->nid, gfp);
> > + pte = iommu_alloc_page_node(domain->iop.pgtbl.cfg.amd.nid, gfp);
>
> Too many level and used in multiple places. IMO its worth having helper function
> to set/get nid.
There would need to be four functions - set/get for both domain and pgtbl - which
seems like overkill. How about we just put some local variables in:
diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
index d56be842c0b71e..f71140416fcf6c 100644
--- a/drivers/iommu/amd/io_pgtable.c
+++ b/drivers/iommu/amd/io_pgtable.c
@@ -141,11 +141,12 @@ static bool increase_address_space(struct protection_domain *domain,
unsigned long address,
gfp_t gfp)
{
+ struct io_pgtable_cfg *cfg = &domain->iop.pgtbl.cfg;
unsigned long flags;
bool ret = true;
u64 *pte;
- pte = iommu_alloc_page_node(domain->iop.pgtbl.cfg.amd.nid, gfp);
+ pte = iommu_alloc_page_node(cfg->amd.nid, gfp);
if (!pte)
return false;
@@ -182,6 +183,7 @@ static u64 *alloc_pte(struct protection_domain *domain,
gfp_t gfp,
bool *updated)
{
+ struct io_pgtable_cfg *cfg = &domain->iop.pgtbl.cfg;
int level, end_lvl;
u64 *pte, *page;
@@ -233,8 +235,7 @@ static u64 *alloc_pte(struct protection_domain *domain,
if (!IOMMU_PTE_PRESENT(__pte) ||
pte_level == PAGE_MODE_NONE) {
- page = iommu_alloc_page_node(
- domain->iop.pgtbl.cfg.amd.nid, gfp);
+ page = iommu_alloc_page_node(cfg->amd.nid, gfp);
if (!page)
return NULL;
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 1cdd1675a01af2..806156632bf70a 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2030,6 +2030,7 @@ static int do_attach(struct iommu_dev_data *dev_data,
struct protection_domain *domain)
{
struct amd_iommu *iommu = get_amd_iommu_from_dev_data(dev_data);
+ struct io_pgtable_cfg *cfg = &domain->iop.pgtbl.cfg;
int ret = 0;
/* Update data structures */
@@ -2037,8 +2038,8 @@ static int do_attach(struct iommu_dev_data *dev_data,
list_add(&dev_data->list, &domain->dev_list);
/* Update NUMA Node ID */
- if (domain->iop.pgtbl.cfg.amd.nid == NUMA_NO_NODE)
- domain->iop.pgtbl.cfg.amd.nid = dev_to_node(dev_data->dev);
+ if (cfg->amd.nid == NUMA_NO_NODE)
+ cfg->amd.nid = dev_to_node(dev_data->dev);
/* Do reference counting */
domain->dev_iommu[iommu->index] += 1;
Thanks,
Jason
* Re: [PATCH 08/14] iommu/amd: Store the nid in io_pgtable_cfg instead of the domain
2024-08-28 18:13 ` Jason Gunthorpe
@ 2024-08-29 10:47 ` Vasant Hegde
0 siblings, 0 replies; 32+ messages in thread
From: Vasant Hegde @ 2024-08-29 10:47 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: iommu, Joerg Roedel, Robin Murphy, Suravee Suthikulpanit,
Will Deacon, Alejandro Jimenez, Joao Martins, Joerg Roedel,
patches
Jason,
On 8/28/2024 11:43 PM, Jason Gunthorpe wrote:
> On Wed, Aug 28, 2024 at 12:09:01PM +0530, Vasant Hegde wrote:
>>> @@ -145,7 +145,7 @@ static bool increase_address_space(struct protection_domain *domain,
>>> bool ret = true;
>>> u64 *pte;
>>>
>>> - pte = iommu_alloc_page_node(domain->nid, gfp);
>>> + pte = iommu_alloc_page_node(domain->iop.pgtbl.cfg.amd.nid, gfp);
>>
>> Too many level and used in multiple places. IMO its worth having helper function
>> to set/get nid.
>
> There would need to be four functions - set/get domain/pgtbl which
> seems like overkill.. How about we just put some variables in:
This is better. Thanks!
-Vasant
>
> diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
> index d56be842c0b71e..f71140416fcf6c 100644
> --- a/drivers/iommu/amd/io_pgtable.c
> +++ b/drivers/iommu/amd/io_pgtable.c
> @@ -141,11 +141,12 @@ static bool increase_address_space(struct protection_domain *domain,
> unsigned long address,
> gfp_t gfp)
> {
> + struct io_pgtable_cfg *cfg = &domain->iop.pgtbl.cfg;
> unsigned long flags;
> bool ret = true;
> u64 *pte;
>
> - pte = iommu_alloc_page_node(domain->iop.pgtbl.cfg.amd.nid, gfp);
> + pte = iommu_alloc_page_node(cfg->amd.nid, gfp);
> if (!pte)
> return false;
>
> @@ -182,6 +183,7 @@ static u64 *alloc_pte(struct protection_domain *domain,
> gfp_t gfp,
> bool *updated)
> {
> + struct io_pgtable_cfg *cfg = &domain->iop.pgtbl.cfg;
> int level, end_lvl;
> u64 *pte, *page;
>
> @@ -233,8 +235,7 @@ static u64 *alloc_pte(struct protection_domain *domain,
>
> if (!IOMMU_PTE_PRESENT(__pte) ||
> pte_level == PAGE_MODE_NONE) {
> - page = iommu_alloc_page_node(
> - domain->iop.pgtbl.cfg.amd.nid, gfp);
> + page = iommu_alloc_page_node(cfg->amd.nid, gfp);
>
> if (!page)
> return NULL;
> diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
> index 1cdd1675a01af2..806156632bf70a 100644
> --- a/drivers/iommu/amd/iommu.c
> +++ b/drivers/iommu/amd/iommu.c
> @@ -2030,6 +2030,7 @@ static int do_attach(struct iommu_dev_data *dev_data,
> struct protection_domain *domain)
> {
> struct amd_iommu *iommu = get_amd_iommu_from_dev_data(dev_data);
> + struct io_pgtable_cfg *cfg = &domain->iop.pgtbl.cfg;
> int ret = 0;
>
> /* Update data structures */
> @@ -2037,8 +2038,8 @@ static int do_attach(struct iommu_dev_data *dev_data,
> list_add(&dev_data->list, &domain->dev_list);
>
> /* Update NUMA Node ID */
> - if (domain->iop.pgtbl.cfg.amd.nid == NUMA_NO_NODE)
> - domain->iop.pgtbl.cfg.amd.nid = dev_to_node(dev_data->dev);
> + if (cfg->amd.nid == NUMA_NO_NODE)
> + cfg->amd.nid = dev_to_node(dev_data->dev);
>
> /* Do reference counting */
> domain->dev_iommu[iommu->index] += 1;
>
>
> Thanks,
> Jason