public inbox for nouveau@lists.freedesktop.org
* [PATCH 0/3] nouveau/vmm: fix switching between small and large PTEs (series v2)
@ 2026-02-04  3:00 Dave Airlie
  2026-02-04  3:00 ` [PATCH 1/3] nouveau/vmm: rewrite pte tracker using a struct and bitfields Dave Airlie
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Dave Airlie @ 2026-02-04  3:00 UTC (permalink / raw)
  To: dri-devel; +Cc: nouveau

[This is a repost with a fix for a bug noticed in patch 2 from yesterday.]

The nouveau page table has dual page tables with special states for
tracking small vs large pages at the bottom level. However the current
code isn't designed with the higher level large page support in mind.

The nouveau_uvmm/gpuvm code can cause unrefs to get delayed, so things
like ref SPT, map SPT, unmap SPT, ref LPT, map LPT, unref SPT can happen.

Unrefs can end up quite delayed, and that shouldn't matter, as an unref should
only affect reference counts.

However at least the SPT unref path was overwriting the LPT value when
all SPT were unreffed even if an LPT was referenced in between.

This series refactors the code to use a union, then increases its size,
as I don't think even the current code had enough ref counts for SPTEs.
The last patch adds LPTE tracking.

Dave.


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [PATCH 1/3] nouveau/vmm: rewrite pte tracker using a struct and bitfields.
  2026-02-04  3:00 [PATCH 0/3] nouveau/vmm: fix switching between small and large PTEs (series v2) Dave Airlie
@ 2026-02-04  3:00 ` Dave Airlie
  2026-02-04  3:00 ` [PATCH 2/3] nouveau/vmm: increase size of vmm pte tracker struct to u32 (v2) Dave Airlie
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Dave Airlie @ 2026-02-04  3:00 UTC (permalink / raw)
  To: dri-devel; +Cc: nouveau

From: Dave Airlie <airlied@redhat.com>

I want to increase the counters here and start tracking LPTs as well,
since there are certain situations where userspace with mixed page sizes
can cause refs/unrefs to live longer, so we need better reference counting.

This should be an entirely non-functional change.

Signed-off-by: Dave Airlie <airlied@redhat.com>
---
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c | 41 ++++++++++---------
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h | 14 +++++--
 2 files changed, 31 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
index f95c58b67633..efc334f6104c 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
@@ -53,7 +53,7 @@ nvkm_vmm_pt_new(const struct nvkm_vmm_desc *desc, bool sparse,
 		}
 	}
 
-	if (!(pgt = kzalloc(sizeof(*pgt) + lpte, GFP_KERNEL)))
+	if (!(pgt = kzalloc(sizeof(*pgt) + (sizeof(pgt->pte[0]) * lpte), GFP_KERNEL)))
 		return NULL;
 	pgt->page = page ? page->shift : 0;
 	pgt->sparse = sparse;
@@ -208,7 +208,7 @@ nvkm_vmm_unref_sptes(struct nvkm_vmm_iter *it, struct nvkm_vmm_pt *pgt,
 	 */
 	for (lpti = ptei >> sptb; ptes; spti = 0, lpti++) {
 		const u32 pten = min(sptn - spti, ptes);
-		pgt->pte[lpti] -= pten;
+		pgt->pte[lpti].s.sptes -= pten;
 		ptes -= pten;
 	}
 
@@ -218,9 +218,9 @@ nvkm_vmm_unref_sptes(struct nvkm_vmm_iter *it, struct nvkm_vmm_pt *pgt,
 
 	for (ptei = pteb = ptei >> sptb; ptei < lpti; pteb = ptei) {
 		/* Skip over any LPTEs that still have valid SPTEs. */
-		if (pgt->pte[pteb] & NVKM_VMM_PTE_SPTES) {
+		if (pgt->pte[pteb].s.sptes) {
 			for (ptes = 1, ptei++; ptei < lpti; ptes++, ptei++) {
-				if (!(pgt->pte[ptei] & NVKM_VMM_PTE_SPTES))
+				if (!(pgt->pte[ptei].s.sptes))
 					break;
 			}
 			continue;
@@ -232,14 +232,14 @@ nvkm_vmm_unref_sptes(struct nvkm_vmm_iter *it, struct nvkm_vmm_pt *pgt,
 		 *
 		 * Determine how many LPTEs need to transition state.
 		 */
-		pgt->pte[ptei] &= ~NVKM_VMM_PTE_VALID;
+		pgt->pte[ptei].s.spte_valid = false;
 		for (ptes = 1, ptei++; ptei < lpti; ptes++, ptei++) {
-			if (pgt->pte[ptei] & NVKM_VMM_PTE_SPTES)
+			if (pgt->pte[ptei].s.sptes)
 				break;
-			pgt->pte[ptei] &= ~NVKM_VMM_PTE_VALID;
+			pgt->pte[ptei].s.spte_valid = false;
 		}
 
-		if (pgt->pte[pteb] & NVKM_VMM_PTE_SPARSE) {
+		if (pgt->pte[pteb].s.sparse) {
 			TRA(it, "LPTE %05x: U -> S %d PTEs", pteb, ptes);
 			pair->func->sparse(vmm, pgt->pt[0], pteb, ptes);
 		} else
@@ -307,7 +307,7 @@ nvkm_vmm_ref_sptes(struct nvkm_vmm_iter *it, struct nvkm_vmm_pt *pgt,
 	 */
 	for (lpti = ptei >> sptb; ptes; spti = 0, lpti++) {
 		const u32 pten = min(sptn - spti, ptes);
-		pgt->pte[lpti] += pten;
+		pgt->pte[lpti].s.sptes += pten;
 		ptes -= pten;
 	}
 
@@ -317,9 +317,9 @@ nvkm_vmm_ref_sptes(struct nvkm_vmm_iter *it, struct nvkm_vmm_pt *pgt,
 
 	for (ptei = pteb = ptei >> sptb; ptei < lpti; pteb = ptei) {
 		/* Skip over any LPTEs that already have valid SPTEs. */
-		if (pgt->pte[pteb] & NVKM_VMM_PTE_VALID) {
+		if (pgt->pte[pteb].s.spte_valid) {
 			for (ptes = 1, ptei++; ptei < lpti; ptes++, ptei++) {
-				if (!(pgt->pte[ptei] & NVKM_VMM_PTE_VALID))
+				if (!pgt->pte[ptei].s.spte_valid)
 					break;
 			}
 			continue;
@@ -331,14 +331,14 @@ nvkm_vmm_ref_sptes(struct nvkm_vmm_iter *it, struct nvkm_vmm_pt *pgt,
 		 *
 		 * Determine how many LPTEs need to transition state.
 		 */
-		pgt->pte[ptei] |= NVKM_VMM_PTE_VALID;
+		pgt->pte[ptei].s.spte_valid = true;
 		for (ptes = 1, ptei++; ptei < lpti; ptes++, ptei++) {
-			if (pgt->pte[ptei] & NVKM_VMM_PTE_VALID)
+			if (pgt->pte[ptei].s.spte_valid)
 				break;
-			pgt->pte[ptei] |= NVKM_VMM_PTE_VALID;
+			pgt->pte[ptei].s.spte_valid = true;
 		}
 
-		if (pgt->pte[pteb] & NVKM_VMM_PTE_SPARSE) {
+		if (pgt->pte[pteb].s.sparse) {
 			const u32 spti = pteb * sptn;
 			const u32 sptc = ptes * sptn;
 			/* The entire LPTE is marked as sparse, we need
@@ -386,7 +386,8 @@ nvkm_vmm_sparse_ptes(const struct nvkm_vmm_desc *desc,
 			pgt->pde[ptei++] = NVKM_VMM_PDE_SPARSE;
 	} else
 	if (desc->type == LPT) {
-		memset(&pgt->pte[ptei], NVKM_VMM_PTE_SPARSE, ptes);
+		union nvkm_pte_tracker sparse = { .s.sparse = 1 };
+		memset(&pgt->pte[ptei].u, sparse.u, ptes);
 	}
 }
 
@@ -398,7 +399,7 @@ nvkm_vmm_sparse_unref_ptes(struct nvkm_vmm_iter *it, bool pfn, u32 ptei, u32 pte
 		memset(&pt->pde[ptei], 0x00, sizeof(pt->pde[0]) * ptes);
 	else
 	if (it->desc->type == LPT)
-		memset(&pt->pte[ptei], 0x00, sizeof(pt->pte[0]) * ptes);
+		memset(&pt->pte[ptei].u, 0x00, sizeof(pt->pte[0]) * ptes);
 	return nvkm_vmm_unref_ptes(it, pfn, ptei, ptes);
 }
 
@@ -445,9 +446,9 @@ nvkm_vmm_ref_hwpt(struct nvkm_vmm_iter *it, struct nvkm_vmm_pt *pgd, u32 pdei)
 		 * the SPTEs on some GPUs.
 		 */
 		for (ptei = pteb = 0; ptei < pten; pteb = ptei) {
-			bool spte = pgt->pte[ptei] & NVKM_VMM_PTE_SPTES;
+			bool spte = !!pgt->pte[ptei].s.sptes;
 			for (ptes = 1, ptei++; ptei < pten; ptes++, ptei++) {
-				bool next = pgt->pte[ptei] & NVKM_VMM_PTE_SPTES;
+				bool next = !!pgt->pte[ptei].s.sptes;
 				if (spte != next)
 					break;
 			}
@@ -461,7 +462,7 @@ nvkm_vmm_ref_hwpt(struct nvkm_vmm_iter *it, struct nvkm_vmm_pt *pgd, u32 pdei)
 			} else {
 				desc->func->unmap(vmm, pt, pteb, ptes);
 				while (ptes--)
-					pgt->pte[pteb++] |= NVKM_VMM_PTE_VALID;
+					pgt->pte[pteb++].s.spte_valid = true;
 			}
 		}
 	} else {
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h
index 4586a425dbe4..a6312a0e6b84 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h
@@ -4,6 +4,15 @@
 #include <core/memory.h>
 enum nvkm_memory_target;
 
+union nvkm_pte_tracker {
+	u8 u;
+	struct {
+		u8 sparse:1;
+		u8 spte_valid:1;
+		u8 sptes:6;
+	} s;
+};
+
 struct nvkm_vmm_pt {
 	/* Some GPUs have a mapping level with a dual page tables to
 	 * support large and small pages in the same address-range.
@@ -44,10 +53,7 @@ struct nvkm_vmm_pt {
 	 *
 	 * This information is used to manage LPTE state transitions.
 	 */
-#define NVKM_VMM_PTE_SPARSE 0x80
-#define NVKM_VMM_PTE_VALID  0x40
-#define NVKM_VMM_PTE_SPTES  0x3f
-	u8 pte[];
+	union nvkm_pte_tracker pte[];
 };
 
 typedef void (*nvkm_vmm_pxe_func)(struct nvkm_vmm *,
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [PATCH 2/3] nouveau/vmm: increase size of vmm pte tracker struct to u32 (v2)
  2026-02-04  3:00 [PATCH 0/3] nouveau/vmm: fix switching between small and large PTEs (series v2) Dave Airlie
  2026-02-04  3:00 ` [PATCH 1/3] nouveau/vmm: rewrite pte tracker using a struct and bitfields Dave Airlie
@ 2026-02-04  3:00 ` Dave Airlie
  2026-02-04  3:00 ` [PATCH 3/3] nouveau/vmm: start tracking if the LPT PTE is valid. (v6) Dave Airlie
  2026-02-04 12:43 ` [PATCH 0/3] nouveau/vmm: fix switching between small and large PTEs (series v2) Mary Guillemard
  3 siblings, 0 replies; 6+ messages in thread
From: Dave Airlie @ 2026-02-04  3:00 UTC (permalink / raw)
  To: dri-devel; +Cc: nouveau

From: Dave Airlie <airlied@redhat.com>

We need to track larger counts of SPTEs than previously, due to unrefs
sometimes getting delayed.

This doesn't fix LPT tracking yet, it just creates space for it.

Signed-off-by: Dave Airlie <airlied@redhat.com>

---
v2: fix memset32 wrong length
---
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c | 6 +++---
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h | 9 +++++----
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
index efc334f6104c..44daeec0aa6d 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
@@ -387,7 +387,7 @@ nvkm_vmm_sparse_ptes(const struct nvkm_vmm_desc *desc,
 	} else
 	if (desc->type == LPT) {
 		union nvkm_pte_tracker sparse = { .s.sparse = 1 };
-		memset(&pgt->pte[ptei].u, sparse.u, ptes);
+		memset32(&pgt->pte[ptei].u, sparse.u, ptes);
 	}
 }
 
@@ -399,7 +399,7 @@ nvkm_vmm_sparse_unref_ptes(struct nvkm_vmm_iter *it, bool pfn, u32 ptei, u32 pte
 		memset(&pt->pde[ptei], 0x00, sizeof(pt->pde[0]) * ptes);
 	else
 	if (it->desc->type == LPT)
-		memset(&pt->pte[ptei].u, 0x00, sizeof(pt->pte[0]) * ptes);
+		memset32(&pt->pte[ptei].u, 0x00, ptes);
 	return nvkm_vmm_unref_ptes(it, pfn, ptei, ptes);
 }
 
@@ -458,7 +458,7 @@ nvkm_vmm_ref_hwpt(struct nvkm_vmm_iter *it, struct nvkm_vmm_pt *pgd, u32 pdei)
 					desc->func->sparse(vmm, pt, pteb, ptes);
 				else
 					desc->func->invalid(vmm, pt, pteb, ptes);
-				memset(&pgt->pte[pteb], 0x00, ptes);
+				memset32(&pgt->pte[pteb].u, 0x00, ptes);
 			} else {
 				desc->func->unmap(vmm, pt, pteb, ptes);
 				while (ptes--)
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h
index a6312a0e6b84..a8b08126e8dc 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h
@@ -5,11 +5,12 @@
 enum nvkm_memory_target;
 
 union nvkm_pte_tracker {
-	u8 u;
+	u32 u;
 	struct {
-		u8 sparse:1;
-		u8 spte_valid:1;
-		u8 sptes:6;
+		u32 sparse:1;
+		u32 spte_valid:1;
+		u32 padding:14;
+		u32 sptes:16;
 	} s;
 };
 
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [PATCH 3/3] nouveau/vmm: start tracking if the LPT PTE is valid. (v6)
  2026-02-04  3:00 [PATCH 0/3] nouveau/vmm: fix switching between small and large PTEs (series v2) Dave Airlie
  2026-02-04  3:00 ` [PATCH 1/3] nouveau/vmm: rewrite pte tracker using a struct and bitfields Dave Airlie
  2026-02-04  3:00 ` [PATCH 2/3] nouveau/vmm: increase size of vmm pte tracker struct to u32 (v2) Dave Airlie
@ 2026-02-04  3:00 ` Dave Airlie
  2026-02-04 12:43 ` [PATCH 0/3] nouveau/vmm: fix switching between small and large PTEs (series v2) Mary Guillemard
  3 siblings, 0 replies; 6+ messages in thread
From: Dave Airlie @ 2026-02-04  3:00 UTC (permalink / raw)
  To: dri-devel; +Cc: nouveau

From: Dave Airlie <airlied@redhat.com>

When NVK enabled large pages, userspace tests were seeing fault
reports at valid addresses.

There was a case where an address range moving from 4k pages to a 64k
page could expose a race between unmapping the 4k pages, mapping the
64k page, and unreffing the 4k pages.

Unreffing the 4k pages would cause the dual-page-table handling to
always set the LPTE entry to SPARSE or INVALID, but if we'd mapped a
valid LPTE in the meantime, it would get trashed. Keep track of when
a valid LPTE has been referenced, and don't reset it in that case.

This adds an LPTE valid tracker and an LPTE reference count.

Whenever an LPTE is referenced, it is marked valid and the refcount is
incremented; whenever it is unreferenced, the refcount is decremented,
and the valid bit is cleared once it reaches zero.

Link: https://gitlab.freedesktop.org/mesa/mesa/-/issues/14610
Signed-off-by: Dave Airlie <airlied@redhat.com>
---
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c | 39 +++++++++++++++----
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h |  3 +-
 2 files changed, 33 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
index 44daeec0aa6d..19a7407cf702 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
@@ -242,14 +242,17 @@ nvkm_vmm_unref_sptes(struct nvkm_vmm_iter *it, struct nvkm_vmm_pt *pgt,
 		if (pgt->pte[pteb].s.sparse) {
 			TRA(it, "LPTE %05x: U -> S %d PTEs", pteb, ptes);
 			pair->func->sparse(vmm, pgt->pt[0], pteb, ptes);
-		} else
-		if (pair->func->invalid) {
-			/* If the MMU supports it, restore the LPTE to the
-			 * INVALID state to tell the MMU there is no point
-			 * trying to fetch the corresponding SPTEs.
-			 */
-			TRA(it, "LPTE %05x: U -> I %d PTEs", pteb, ptes);
-			pair->func->invalid(vmm, pgt->pt[0], pteb, ptes);
+		} else if (!pgt->pte[pteb].s.lpte_valid) {
+			if (pair->func->invalid) {
+				/* If the MMU supports it, restore the LPTE to the
+				 * INVALID state to tell the MMU there is no point
+				 * trying to fetch the corresponding SPTEs.
+				 */
+				TRA(it, "LPTE %05x: U -> I %d PTEs", pteb, ptes);
+				pair->func->invalid(vmm, pgt->pt[0], pteb, ptes);
+			}
+		} else {
+			TRA(it, "LPTE %05x: V %d PTEs", pteb, ptes);
 		}
 	}
 }
@@ -280,6 +283,15 @@ nvkm_vmm_unref_ptes(struct nvkm_vmm_iter *it, bool pfn, u32 ptei, u32 ptes)
 	if (desc->type == SPT && (pgt->refs[0] || pgt->refs[1]))
 		nvkm_vmm_unref_sptes(it, pgt, desc, ptei, ptes);
 
+	if (desc->type == LPT && (pgt->refs[0] || pgt->refs[1])) {
+		for (u32 lpti = ptei; ptes; lpti++) {
+			pgt->pte[lpti].s.lptes--;
+			if (pgt->pte[lpti].s.lptes == 0)
+				pgt->pte[lpti].s.lpte_valid = false;
+			ptes--;
+		}
+	}
+
 	/* PT no longer needed? Destroy it. */
 	if (!pgt->refs[type]) {
 		it->lvl++;
@@ -332,10 +344,12 @@ nvkm_vmm_ref_sptes(struct nvkm_vmm_iter *it, struct nvkm_vmm_pt *pgt,
 		 * Determine how many LPTEs need to transition state.
 		 */
 		pgt->pte[ptei].s.spte_valid = true;
+		pgt->pte[ptei].s.lpte_valid = false;
 		for (ptes = 1, ptei++; ptei < lpti; ptes++, ptei++) {
 			if (pgt->pte[ptei].s.spte_valid)
 				break;
 			pgt->pte[ptei].s.spte_valid = true;
+			pgt->pte[ptei].s.lpte_valid = false;
 		}
 
 		if (pgt->pte[pteb].s.sparse) {
@@ -374,6 +388,15 @@ nvkm_vmm_ref_ptes(struct nvkm_vmm_iter *it, bool pfn, u32 ptei, u32 ptes)
 	if (desc->type == SPT)
 		nvkm_vmm_ref_sptes(it, pgt, desc, ptei, ptes);
 
+	if (desc->type == LPT) {
+		for (u32 lpti = ptei; ptes; lpti++) {
+			pgt->pte[lpti].s.spte_valid = false;
+			pgt->pte[lpti].s.lpte_valid = true;
+			pgt->pte[lpti].s.lptes++;
+			ptes--;
+		}
+	}
+
 	return true;
 }
 
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h
index a8b08126e8dc..4ec0a3a21169 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h
@@ -9,7 +9,8 @@ union nvkm_pte_tracker {
 	struct {
 		u32 sparse:1;
 		u32 spte_valid:1;
-		u32 padding:14;
+		u32 lpte_valid:1;
+		u32 lptes:13;
 		u32 sptes:16;
 	} s;
 };
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [PATCH 0/3] nouveau/vmm: fix switching between small and large PTEs (series v2)
  2026-02-04  3:00 [PATCH 0/3] nouveau/vmm: fix switching between small and large PTEs (series v2) Dave Airlie
                   ` (2 preceding siblings ...)
  2026-02-04  3:00 ` [PATCH 3/3] nouveau/vmm: start tracking if the LPT PTE is valid. (v6) Dave Airlie
@ 2026-02-04 12:43 ` Mary Guillemard
  2026-02-04 16:40   ` M Henning
  3 siblings, 1 reply; 6+ messages in thread
From: Mary Guillemard @ 2026-02-04 12:43 UTC (permalink / raw)
  To: Dave Airlie; +Cc: dri-devel, nouveau

On Wed, Feb 04, 2026 at 01:00:04PM +1000, Dave Airlie wrote:
> [This is a repost with a fix for a bug noticed in patch 2 from yesterday.]
> 
> The nouveau page table has dual page tables with special states for
> tracking small vs large pages at the bottom level. However the current
> code isn't designed with the higher level large page support in mind.
> 
> The nouveau_uvmm/gpuvm code can cause unrefs to get delayed, so things
> like ref SPT, map SPT, unmap SPT, ref LPT, map LPT, unref SPT can happen.
> 
> unrefs can end up quite delayed and it shouldn't matter as unref should
> just affect reference counts.
> 
> However at least the SPT unref path was overwriting the LPT value when
> all SPT were unreffed even if an LPT was referenced in between.
> 
> This series refactors the code to use a union, then increases the size
> as I think even with the current code there was enough ref counts for SPTE.
> The last patch adds LPTE tracking.
> 
> Dave.
>

I extensively tested this today (on GA107 and AD107) with compression
reenabled on mesa side, everything is working as expected and the MMU
faults are gone.

Reviewed-by: Mary Guillemard <mary@mary.zone>
Tested-by: Mary Guillemard <mary@mary.zone>

Regards,
Mary

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH 0/3] nouveau/vmm: fix switching between small and large PTEs (series v2)
  2026-02-04 12:43 ` [PATCH 0/3] nouveau/vmm: fix switching between small and large PTEs (series v2) Mary Guillemard
@ 2026-02-04 16:40   ` M Henning
  0 siblings, 0 replies; 6+ messages in thread
From: M Henning @ 2026-02-04 16:40 UTC (permalink / raw)
  To: Mary Guillemard; +Cc: dri-devel, nouveau

I also tested this and it fixes the issues I was seeing in both the CTS
and my reproducer script.

Tested-by: Mel Henning <mhenning@darkrefraction.com>

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2026-02-04 16:41 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-04  3:00 [PATCH 0/3] nouveau/vmm: fix switching between small and large PTEs (series v2) Dave Airlie
2026-02-04  3:00 ` [PATCH 1/3] nouveau/vmm: rewrite pte tracker using a struct and bitfields Dave Airlie
2026-02-04  3:00 ` [PATCH 2/3] nouveau/vmm: increase size of vmm pte tracker struct to u32 (v2) Dave Airlie
2026-02-04  3:00 ` [PATCH 3/3] nouveau/vmm: start tracking if the LPT PTE is valid. (v6) Dave Airlie
2026-02-04 12:43 ` [PATCH 0/3] nouveau/vmm: fix switching between small and large PTEs (series v2) Mary Guillemard
2026-02-04 16:40   ` M Henning
