* [PATCH v4 0/6] drm/gem-shmem: Track page accessed/dirty status
@ 2026-02-27 11:42 Thomas Zimmermann
2026-02-27 11:42 ` [PATCH v4 1/6] drm/gem-shmem: Use obj directly where appropriate in fault handler Thomas Zimmermann
` (5 more replies)
0 siblings, 6 replies; 35+ messages in thread
From: Thomas Zimmermann @ 2026-02-27 11:42 UTC (permalink / raw)
To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
maarten.lankhorst, mripard, airlied, simona, linux-mm
Cc: dri-devel, Thomas Zimmermann
Track page accessed/dirty status in gem-shmem for better integration with
the overall memory management. Gem-shmem has long had two flag bits in
struct drm_gem_shmem_object, named pages_mark_accessed_on_put and
pages_mark_dirty_on_put, but never used them much, except for some odd
cases in drivers. Therefore pages in gem-shmem were never marked
correctly. (Other DRM memory managers do at least some coarse-grained
tracking.)
Patches 1 to 4 prepare the mmap page-fault handler for tracking page
status easily. The pages are already available; only the mmap handling
needs to be adapted. The way the shmem code interacts with huge-page
support is also not optimal, hence refactor it slightly.
Patch 5 adds tracking access and dirty status in mmap. With the earlier
patches, this change simply falls into place.
Patch 6 adds tracking access and dirty status in vmap. Because there's
no fault handling here, we refer to the existing status bits in struct
drm_gem_shmem_object. Each page's status will be updated by the page
release in drm_gem_put_pages(). The imagination driver requires a small
fix to make it work correctly.
Tested with CONFIG_DEBUG_VM=y by running animations on DRM's bochs driver for
several hours. This uses gem-shmem's mmap and vmap extensively.
v4:
- mark folio as accessed on VM_FAULT_NOPAGE (Boris)
- validate state in mkwrite (Boris)
v3:
- rewrite for VM_PFNMAP
- do more preparational patches
v2:
- fix possible OOB access into page array (Matthew)
- simplify fault-handler error handling (Boris)
- simplify internal interfaces (Matthew)
Thomas Zimmermann (6):
drm/gem-shmem: Use obj directly where appropriate in fault handler
drm/gem-shmem: Test for existence of page in mmap fault handler
drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd()
drm/gem-shmem: Refactor drm_gem_shmem_try_map_pmd()
drm/gem-shmem: Track folio accessed/dirty status in mmap
drm/gem-shmem: Track folio accessed/dirty status in vmap
drivers/gpu/drm/drm_gem_shmem_helper.c | 79 +++++++++++++++++---------
drivers/gpu/drm/imagination/pvr_gem.c | 6 +-
2 files changed, 55 insertions(+), 30 deletions(-)
base-commit: 1c44015babd759b8e5234084dffcc08a0b784333
--
2.52.0
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v4 1/6] drm/gem-shmem: Use obj directly where appropriate in fault handler
2026-02-27 11:42 [PATCH v4 0/6] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
@ 2026-02-27 11:42 ` Thomas Zimmermann
2026-02-27 11:42 ` [PATCH v4 2/6] drm/gem-shmem: Test for existence of page in mmap " Thomas Zimmermann
` (4 subsequent siblings)
5 siblings, 0 replies; 35+ messages in thread
From: Thomas Zimmermann @ 2026-02-27 11:42 UTC (permalink / raw)
To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
maarten.lankhorst, mripard, airlied, simona, linux-mm
Cc: dri-devel, Thomas Zimmermann
Replace shmem->base with obj in several places. Both refer to the same
object, but the latter is easier to read.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 7b5a49935ae4..1e3bfbf0cb97 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -584,7 +584,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
/* Offset to faulty address in the VMA. */
page_offset = vmf->pgoff - vma->vm_pgoff;
- dma_resv_lock(shmem->base.resv, NULL);
+ dma_resv_lock(obj->resv, NULL);
if (page_offset >= num_pages ||
drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
@@ -602,7 +602,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
ret = vmf_insert_pfn(vma, vmf->address, pfn);
out:
- dma_resv_unlock(shmem->base.resv);
+ dma_resv_unlock(obj->resv);
return ret;
}
--
2.52.0
* [PATCH v4 2/6] drm/gem-shmem: Test for existence of page in mmap fault handler
2026-02-27 11:42 [PATCH v4 0/6] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
2026-02-27 11:42 ` [PATCH v4 1/6] drm/gem-shmem: Use obj directly where appropriate in fault handler Thomas Zimmermann
@ 2026-02-27 11:42 ` Thomas Zimmermann
2026-02-27 11:42 ` [PATCH v4 3/6] drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd() Thomas Zimmermann
` (3 subsequent siblings)
5 siblings, 0 replies; 35+ messages in thread
From: Thomas Zimmermann @ 2026-02-27 11:42 UTC (permalink / raw)
To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
maarten.lankhorst, mripard, airlied, simona, linux-mm
Cc: dri-devel, Thomas Zimmermann
Not having a page pointer in the mmap fault handler is an error. Test
for this situation and return VM_FAULT_SIGBUS if so. Also replace several
lookups of the page with a local variable.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 1e3bfbf0cb97..cf5361946030 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -574,31 +574,31 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct drm_gem_object *obj = vma->vm_private_data;
+ struct drm_device *dev = obj->dev;
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
loff_t num_pages = obj->size >> PAGE_SHIFT;
- vm_fault_t ret;
+ vm_fault_t ret = VM_FAULT_SIGBUS;
struct page **pages = shmem->pages;
- pgoff_t page_offset;
+ pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
+ struct page *page;
unsigned long pfn;
- /* Offset to faulty address in the VMA. */
- page_offset = vmf->pgoff - vma->vm_pgoff;
-
dma_resv_lock(obj->resv, NULL);
- if (page_offset >= num_pages ||
- drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
- shmem->madv < 0) {
- ret = VM_FAULT_SIGBUS;
+ if (page_offset >= num_pages || drm_WARN_ON_ONCE(dev, !shmem->pages) ||
+ shmem->madv < 0)
+ goto out;
+
+ page = pages[page_offset];
+ if (drm_WARN_ON_ONCE(dev, !page))
goto out;
- }
- if (drm_gem_shmem_try_map_pmd(vmf, vmf->address, pages[page_offset])) {
+ if (drm_gem_shmem_try_map_pmd(vmf, vmf->address, page)) {
ret = VM_FAULT_NOPAGE;
goto out;
}
- pfn = page_to_pfn(pages[page_offset]);
+ pfn = page_to_pfn(page);
ret = vmf_insert_pfn(vma, vmf->address, pfn);
out:
--
2.52.0
* [PATCH v4 3/6] drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd()
2026-02-27 11:42 [PATCH v4 0/6] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
2026-02-27 11:42 ` [PATCH v4 1/6] drm/gem-shmem: Use obj directly where appropriate in fault handler Thomas Zimmermann
2026-02-27 11:42 ` [PATCH v4 2/6] drm/gem-shmem: Test for existence of page in mmap " Thomas Zimmermann
@ 2026-02-27 11:42 ` Thomas Zimmermann
2026-02-27 11:42 ` [PATCH v4 4/6] drm/gem-shmem: Refactor drm_gem_shmem_try_map_pmd() Thomas Zimmermann
` (2 subsequent siblings)
5 siblings, 0 replies; 35+ messages in thread
From: Thomas Zimmermann @ 2026-02-27 11:42 UTC (permalink / raw)
To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
maarten.lankhorst, mripard, airlied, simona, linux-mm
Cc: dri-devel, Thomas Zimmermann
Return the exact VM_FAULT_ mask from drm_gem_shmem_try_map_pmd(). This
gives the caller better insight into the result. Return 0 if nothing was
done.
If the caller sees VM_FAULT_NOPAGE, drm_gem_shmem_try_map_pmd() added a
PMD entry to the page table. As before, return early from the page-fault
handler in that case.
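The new calling convention can be sketched in a few lines of plain
userspace C (not kernel code; vm_fault_t and the constants are stubbed
here, and the counter stands in for vmf_insert_pfn()):

```c
#include <assert.h>

/* Userspace model of the convention described above: the helper returns
 * 0 when it did nothing and VM_FAULT_NOPAGE when it installed a PMD
 * entry; on 0 the caller falls back to inserting a single PFN.
 */
typedef unsigned int vm_fault_t;
#define VM_FAULT_NOPAGE 0x0100

static int pte_inserts; /* counts fallback PTE insertions */

static vm_fault_t try_map_pmd(int pmd_usable)
{
	if (pmd_usable)
		return VM_FAULT_NOPAGE; /* PMD entry installed */
	return 0;                       /* nothing done */
}

static vm_fault_t fault(int pmd_usable)
{
	vm_fault_t ret = try_map_pmd(pmd_usable);

	if (ret == VM_FAULT_NOPAGE)
		return ret;             /* early return, as in the patch */
	pte_inserts++;                  /* stands in for vmf_insert_pfn() */
	return VM_FAULT_NOPAGE;
}
```

When the PMD path succeeds, the fallback never runs; when it returns 0,
the caller continues with the single-page insertion.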
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Suggested-by: Matthew Wilcox <willy@infradead.org>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index cf5361946030..3c261b53c974 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -550,8 +550,8 @@ int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
-static bool drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
- struct page *page)
+static vm_fault_t drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
+ struct page *page)
{
#ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
unsigned long pfn = page_to_pfn(page);
@@ -562,12 +562,11 @@ static bool drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
pmd_none(*vmf->pmd) &&
folio_test_pmd_mappable(page_folio(page))) {
pfn &= PMD_MASK >> PAGE_SHIFT;
- if (vmf_insert_pfn_pmd(vmf, pfn, false) == VM_FAULT_NOPAGE)
- return true;
+ return vmf_insert_pfn_pmd(vmf, pfn, false);
}
#endif
- return false;
+ return 0;
}
static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
@@ -593,10 +592,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
if (drm_WARN_ON_ONCE(dev, !page))
goto out;
- if (drm_gem_shmem_try_map_pmd(vmf, vmf->address, page)) {
- ret = VM_FAULT_NOPAGE;
+ ret = drm_gem_shmem_try_map_pmd(vmf, vmf->address, page);
+ if (ret == VM_FAULT_NOPAGE)
goto out;
- }
pfn = page_to_pfn(page);
ret = vmf_insert_pfn(vma, vmf->address, pfn);
--
2.52.0
* [PATCH v4 4/6] drm/gem-shmem: Refactor drm_gem_shmem_try_map_pmd()
2026-02-27 11:42 [PATCH v4 0/6] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
` (2 preceding siblings ...)
2026-02-27 11:42 ` [PATCH v4 3/6] drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd() Thomas Zimmermann
@ 2026-02-27 11:42 ` Thomas Zimmermann
2026-02-27 11:42 ` [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap Thomas Zimmermann
2026-02-27 11:42 ` [PATCH v4 6/6] drm/gem-shmem: Track folio accessed/dirty status in vmap Thomas Zimmermann
5 siblings, 0 replies; 35+ messages in thread
From: Thomas Zimmermann @ 2026-02-27 11:42 UTC (permalink / raw)
To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
maarten.lankhorst, mripard, airlied, simona, linux-mm
Cc: dri-devel, Thomas Zimmermann
The current mmap page-fault handler requires some changes before it
can track folio access.
Move the call to folio_test_pmd_mappable() into the mmap page-fault
handler, before calling drm_gem_shmem_try_map_pmd(). The folio will
become useful for tracking the access status.
Also rename drm_gem_shmem_try_map_pmd() to _try_insert_pfn_pmd() and
pass only the page fault and the page-frame number. The new name and
parameters make it similar to vmf_insert_pfn_pmd().
No functional changes. If PMD mapping fails or is not supported,
insert a regular PFN as before.
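The alignment check kept inside the helper can be illustrated with a
small userspace sketch (not kernel code; PAGE_SHIFT/PMD_SHIFT values are
x86-64-style assumptions, not taken from the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* The faulting virtual address and the physical address must have the
 * same offset within a PMD-sized region (2 MiB here) before a PMD entry
 * may be installed; the PFN is then rounded down to the PMD boundary.
 */
#define PAGE_SHIFT 12
#define PMD_SHIFT  21
#define PMD_MASK   (~((uint64_t)(1UL << PMD_SHIFT) - 1))

static bool pmd_aligned(uint64_t addr, uint64_t pfn)
{
	uint64_t paddr = pfn << PAGE_SHIFT;

	return (addr & ~PMD_MASK) == (paddr & ~PMD_MASK);
}

static uint64_t pmd_base_pfn(uint64_t pfn)
{
	return pfn & (PMD_MASK >> PAGE_SHIFT);
}
```

For example, virtual address 0x200000 and PFN 0x200 (physical 0x200000)
share the offset 0 within their PMD regions, so a PMD mapping is
possible; PFN 0x201 is not co-aligned with that address.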
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 25 ++++++++++++-------------
1 file changed, 12 insertions(+), 13 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 3c261b53c974..cefa50eaf7a4 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -550,17 +550,14 @@ int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
-static vm_fault_t drm_gem_shmem_try_map_pmd(struct vm_fault *vmf, unsigned long addr,
- struct page *page)
+static vm_fault_t drm_gem_shmem_try_insert_pfn_pmd(struct vm_fault *vmf, unsigned long pfn)
{
#ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
- unsigned long pfn = page_to_pfn(page);
unsigned long paddr = pfn << PAGE_SHIFT;
- bool aligned = (addr & ~PMD_MASK) == (paddr & ~PMD_MASK);
+ bool aligned = (vmf->address & ~PMD_MASK) == (paddr & ~PMD_MASK);
- if (aligned &&
- pmd_none(*vmf->pmd) &&
- folio_test_pmd_mappable(page_folio(page))) {
+ if (aligned && pmd_none(*vmf->pmd)) {
+ /* Read-only mapping; split upon write fault */
pfn &= PMD_MASK >> PAGE_SHIFT;
return vmf_insert_pfn_pmd(vmf, pfn, false);
}
@@ -580,6 +577,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
struct page **pages = shmem->pages;
pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
struct page *page;
+ struct folio *folio;
unsigned long pfn;
dma_resv_lock(obj->resv, NULL);
@@ -591,15 +589,16 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
page = pages[page_offset];
if (drm_WARN_ON_ONCE(dev, !page))
goto out;
-
- ret = drm_gem_shmem_try_map_pmd(vmf, vmf->address, page);
- if (ret == VM_FAULT_NOPAGE)
- goto out;
+ folio = page_folio(page);
pfn = page_to_pfn(page);
- ret = vmf_insert_pfn(vma, vmf->address, pfn);
- out:
+ if (folio_test_pmd_mappable(folio))
+ ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
+ if (ret != VM_FAULT_NOPAGE)
+ ret = vmf_insert_pfn(vma, vmf->address, pfn);
+
+out:
dma_resv_unlock(obj->resv);
return ret;
--
2.52.0
* [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-02-27 11:42 [PATCH v4 0/6] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
` (3 preceding siblings ...)
2026-02-27 11:42 ` [PATCH v4 4/6] drm/gem-shmem: Refactor drm_gem_shmem_try_map_pmd() Thomas Zimmermann
@ 2026-02-27 11:42 ` Thomas Zimmermann
2026-03-12 17:36 ` Tommaso Merciai
2026-02-27 11:42 ` [PATCH v4 6/6] drm/gem-shmem: Track folio accessed/dirty status in vmap Thomas Zimmermann
5 siblings, 1 reply; 35+ messages in thread
From: Thomas Zimmermann @ 2026-02-27 11:42 UTC (permalink / raw)
To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
maarten.lankhorst, mripard, airlied, simona, linux-mm
Cc: dri-devel, Thomas Zimmermann
Invoke folio_mark_accessed() in mmap page faults to add the folio to
the memory manager's LRU list. Userspace invokes mmap to get the memory
for software rendering. Compositors do the same when creating the final
on-screen image, so keeping the pages in LRU makes sense. Avoids paging
out graphics buffers when under memory pressure.
In pfn_mkwrite, further invoke folio_mark_dirty() to mark the folio for
writeback in case the underlying file is paged out from system memory.
Page-out rarely happens in practice, yet without dirty tracking it would
corrupt the buffer content.
This has little effect on a system's hardware-accelerated rendering, which
only mmaps for an initial setup of textures, meshes, shaders, etc.
v4:
- test for VM_FAULT_NOPAGE before marking folio as accessed (Boris)
- test page-array bounds in mkwrite handler (Boris)
v3:
- rewrite for VM_PFNMAP
v2:
- adapt to changes in drm_gem_shmem_try_mmap_pmd()
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index cefa50eaf7a4..1ab2bbd3860c 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -598,6 +598,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
if (ret != VM_FAULT_NOPAGE)
ret = vmf_insert_pfn(vma, vmf->address, pfn);
+ if (ret == VM_FAULT_NOPAGE)
+ folio_mark_accessed(folio);
+
out:
dma_resv_unlock(obj->resv);
@@ -638,10 +641,29 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
drm_gem_vm_close(vma);
}
+static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
+{
+ struct vm_area_struct *vma = vmf->vma;
+ struct drm_gem_object *obj = vma->vm_private_data;
+ struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+ loff_t num_pages = obj->size >> PAGE_SHIFT;
+ pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
+
+ if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
+ return VM_FAULT_SIGBUS;
+
+ file_update_time(vma->vm_file);
+
+ folio_mark_dirty(page_folio(shmem->pages[page_offset]));
+
+ return 0;
+}
+
const struct vm_operations_struct drm_gem_shmem_vm_ops = {
.fault = drm_gem_shmem_fault,
.open = drm_gem_shmem_vm_open,
.close = drm_gem_shmem_vm_close,
+ .pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
};
EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
--
2.52.0
* [PATCH v4 6/6] drm/gem-shmem: Track folio accessed/dirty status in vmap
2026-02-27 11:42 [PATCH v4 0/6] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
` (4 preceding siblings ...)
2026-02-27 11:42 ` [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap Thomas Zimmermann
@ 2026-02-27 11:42 ` Thomas Zimmermann
5 siblings, 0 replies; 35+ messages in thread
From: Thomas Zimmermann @ 2026-02-27 11:42 UTC (permalink / raw)
To: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
maarten.lankhorst, mripard, airlied, simona, linux-mm
Cc: dri-devel, Thomas Zimmermann
On successful vmap, set the pages_mark_accessed_on_put and _dirty_on_put
flags in the gem-shmem object. This signals that the contained pages
require LRU and dirty tracking when they are being released back to
SHMEM. Clear these flags on put, so that the buffer remains quiet until
the next call to vmap. There's no means of tracking dirty status within
vmap itself, as there's no fault handling for kernel mappings.
Both flags, _accessed_on_put and _dirty_on_put, have always been part of
the gem-shmem object, but never used much. So most drivers did not track
the page status correctly.
Only the v3d and imagination drivers make limited use of _dirty_on_put. In
the case of imagination, move the flag setting from init to cleanup. This
ensures writeback of modified pages but does not interfere with the
internal vmap/vunmap calls. V3d already implements this behaviour.
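The flag lifecycle described above can be modeled with a trivial
userspace stub (field and function names mirror the patch, but the
struct and helpers here are stand-ins, not the real DRM API):

```c
#include <assert.h>
#include <stdbool.h>

/* Stub of the gem-shmem object: vmap sets the *_on_put flags, and the
 * page release consumes and clears them, so the buffer stays quiet
 * until the next vmap.
 */
struct shmem_obj {
	bool pages_mark_accessed_on_put;
	bool pages_mark_dirty_on_put;
};

static void vmap(struct shmem_obj *s)
{
	s->pages_mark_accessed_on_put = true;
	s->pages_mark_dirty_on_put = true;
}

static void put_pages(struct shmem_obj *s)
{
	/* the real code passes the flags to drm_gem_put_pages() here */
	s->pages_mark_accessed_on_put = false;
	s->pages_mark_dirty_on_put = false;
}
```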
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com> # gem-shmem
Acked-by: Frank Binns <frank.binns@imgtec.com> # imagination
Tested-by: Frank Binns <frank.binns@imgtec.com> # imagination
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 4 ++++
drivers/gpu/drm/imagination/pvr_gem.c | 6 ++++--
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 1ab2bbd3860c..4500deef4127 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -265,6 +265,8 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
shmem->pages_mark_dirty_on_put,
shmem->pages_mark_accessed_on_put);
shmem->pages = NULL;
+ shmem->pages_mark_accessed_on_put = false;
+ shmem->pages_mark_dirty_on_put = false;
}
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
@@ -397,6 +399,8 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
} else {
iosys_map_set_vaddr(map, shmem->vaddr);
refcount_set(&shmem->vmap_use_count, 1);
+ shmem->pages_mark_accessed_on_put = true;
+ shmem->pages_mark_dirty_on_put = true;
}
}
diff --git a/drivers/gpu/drm/imagination/pvr_gem.c b/drivers/gpu/drm/imagination/pvr_gem.c
index 686a3fe22986..d8660d6a8e01 100644
--- a/drivers/gpu/drm/imagination/pvr_gem.c
+++ b/drivers/gpu/drm/imagination/pvr_gem.c
@@ -25,7 +25,10 @@
static void pvr_gem_object_free(struct drm_gem_object *obj)
{
- drm_gem_shmem_object_free(obj);
+ struct drm_gem_shmem_object *shmem_obj = to_drm_gem_shmem_obj(obj);
+
+ shmem_obj->pages_mark_dirty_on_put = true;
+ drm_gem_shmem_free(shmem_obj);
}
static struct dma_buf *pvr_gem_export(struct drm_gem_object *obj, int flags)
@@ -363,7 +366,6 @@ pvr_gem_object_create(struct pvr_device *pvr_dev, size_t size, u64 flags)
if (IS_ERR(shmem_obj))
return ERR_CAST(shmem_obj);
- shmem_obj->pages_mark_dirty_on_put = true;
shmem_obj->map_wc = !(flags & PVR_BO_CPU_CACHED);
pvr_obj = shmem_gem_to_pvr_gem(shmem_obj);
pvr_obj->flags = flags;
--
2.52.0
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-02-27 11:42 ` [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap Thomas Zimmermann
@ 2026-03-12 17:36 ` Tommaso Merciai
2026-03-12 17:46 ` Biju Das
0 siblings, 1 reply; 35+ messages in thread
From: Tommaso Merciai @ 2026-03-12 17:36 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: boris.brezillon, loic.molinari, willy, frank.binns, matt.coster,
maarten.lankhorst, mripard, airlied, simona, linux-mm, dri-devel,
biju.das.jz
Hi Thomas,
Thanks for your patch.
I'm working on DSI support for RZ/G3E. Since this morning, rebasing on
top of next-20260311, I'm seeing weston hang on my side.
Reverting this patch fixes the issue:
(git revert 28e3918179aa)
I'm wondering if anyone encountered this issue?
Thanks in advance.
Kind Regards,
Tommaso
On Fri, Feb 27, 2026 at 12:42:10PM +0100, Thomas Zimmermann wrote:
> Invoke folio_mark_accessed() in mmap page faults to add the folio to
> the memory manager's LRU list. Userspace invokes mmap to get the memory
> for software rendering. Compositors do the same when creating the final
> on-screen image, so keeping the pages in LRU makes sense. Avoids paging
> out graphics buffers when under memory pressure.
>
> In pfn_mkwrite, further invoke the folio_mark_dirty() to add the folio
> for writeback should the underlying file be paged out from system memory.
> This rarely happens in practice, yet it would corrupt the buffer content.
>
> This has little effect on a system's hardware-accelerated rendering, which
> only mmaps for an initial setup of textures, meshes, shaders, etc.
>
> v4:
> - test for VM_FAULT_NOPAGE before marking folio as accessed (Boris)
> - test page-array bounds in mkwrite handler (Boris)
> v3:
> - rewrite for VM_PFNMAP
> v2:
> - adapt to changes in drm_gem_shmem_try_mmap_pmd()
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> ---
> drivers/gpu/drm/drm_gem_shmem_helper.c | 22 ++++++++++++++++++++++
> 1 file changed, 22 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index cefa50eaf7a4..1ab2bbd3860c 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -598,6 +598,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> if (ret != VM_FAULT_NOPAGE)
> ret = vmf_insert_pfn(vma, vmf->address, pfn);
>
> + if (ret == VM_FAULT_NOPAGE)
> + folio_mark_accessed(folio);
> +
> out:
> dma_resv_unlock(obj->resv);
>
> @@ -638,10 +641,29 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
> drm_gem_vm_close(vma);
> }
>
> +static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
> +{
> + struct vm_area_struct *vma = vmf->vma;
> + struct drm_gem_object *obj = vma->vm_private_data;
> + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> + loff_t num_pages = obj->size >> PAGE_SHIFT;
> + pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
> +
> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> + return VM_FAULT_SIGBUS;
> +
> + file_update_time(vma->vm_file);
> +
> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> +
> + return 0;
> +}
> +
> const struct vm_operations_struct drm_gem_shmem_vm_ops = {
> .fault = drm_gem_shmem_fault,
> .open = drm_gem_shmem_vm_open,
> .close = drm_gem_shmem_vm_close,
> + .pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
> };
> EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
>
> --
> 2.52.0
>
* RE: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-12 17:36 ` Tommaso Merciai
@ 2026-03-12 17:46 ` Biju Das
2026-03-13 6:44 ` Biju Das
2026-03-13 8:33 ` Thomas Zimmermann
0 siblings, 2 replies; 35+ messages in thread
From: Biju Das @ 2026-03-12 17:46 UTC (permalink / raw)
To: Tommaso Merciai, Thomas Zimmermann
Cc: boris.brezillon@collabora.com, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
Hi Tommaso,
> -----Original Message-----
> From: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>
> Sent: 12 March 2026 17:37
> Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
>
> Hi Thomas,
> Thanks for your patch.
>
> I'm working on DSI support for RZ/G3E from this morning rebasing on top of next-20260311 I'm seeing
> that weston hang on my side:
>
> Reverting this patch fix the issue.
> (git revert 28e3918179aa)
>
> I'm wondering if anyone encountered this issue?
> Thanks in advance.
I am also seeing the same issue on RZ/G3L with weston.
Cheers,
Biju
>
> Kind Regards,
> Tommaso
>
> On Fri, Feb 27, 2026 at 12:42:10PM +0100, Thomas Zimmermann wrote:
> > Invoke folio_mark_accessed() in mmap page faults to add the folio to
> > the memory manager's LRU list. Userspace invokes mmap to get the
> > memory for software rendering. Compositors do the same when creating
> > the final on-screen image, so keeping the pages in LRU makes sense.
> > Avoids paging out graphics buffers when under memory pressure.
> >
> > In pfn_mkwrite, further invoke the folio_mark_dirty() to add the folio
> > for writeback should the underlying file be paged out from system memory.
> > This rarely happens in practice, yet it would corrupt the buffer content.
> >
> > This has little effect on a system's hardware-accelerated rendering,
> > which only mmaps for an initial setup of textures, meshes, shaders, etc.
> >
> > v4:
> > - test for VM_FAULT_NOPAGE before marking folio as accessed (Boris)
> > - test page-array bounds in mkwrite handler (Boris)
> > v3:
> > - rewrite for VM_PFNMAP
> > v2:
> > - adapt to changes in drm_gem_shmem_try_mmap_pmd()
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> > ---
> > drivers/gpu/drm/drm_gem_shmem_helper.c | 22 ++++++++++++++++++++++
> > 1 file changed, 22 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > index cefa50eaf7a4..1ab2bbd3860c 100644
> > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > @@ -598,6 +598,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> > if (ret != VM_FAULT_NOPAGE)
> > ret = vmf_insert_pfn(vma, vmf->address, pfn);
> >
> > + if (ret == VM_FAULT_NOPAGE)
> > + folio_mark_accessed(folio);
> > +
> > out:
> > dma_resv_unlock(obj->resv);
> >
> > @@ -638,10 +641,29 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
> > drm_gem_vm_close(vma);
> > }
> >
> > +static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf) {
> > + struct vm_area_struct *vma = vmf->vma;
> > + struct drm_gem_object *obj = vma->vm_private_data;
> > + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> > + loff_t num_pages = obj->size >> PAGE_SHIFT;
> > + pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset
> > +within VMA */
> > +
> > + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> > + return VM_FAULT_SIGBUS;
> > +
> > + file_update_time(vma->vm_file);
> > +
> > + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> > +
> > + return 0;
> > +}
> > +
> > const struct vm_operations_struct drm_gem_shmem_vm_ops = {
> > .fault = drm_gem_shmem_fault,
> > .open = drm_gem_shmem_vm_open,
> > .close = drm_gem_shmem_vm_close,
> > + .pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
> > };
> > EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
> >
> > --
> > 2.52.0
> >
* RE: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-12 17:46 ` Biju Das
@ 2026-03-13 6:44 ` Biju Das
2026-03-13 8:00 ` Thomas Zimmermann
2026-03-13 10:18 ` Boris Brezillon
2026-03-13 8:33 ` Thomas Zimmermann
1 sibling, 2 replies; 35+ messages in thread
From: Biju Das @ 2026-03-13 6:44 UTC (permalink / raw)
To: Biju Das, Tommaso Merciai, Thomas Zimmermann
Cc: boris.brezillon@collabora.com, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
> -----Original Message-----
> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Biju Das
> Sent: 12 March 2026 17:47
> To: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>; Thomas Zimmermann <tzimmermann@suse.de>
> Cc: boris.brezillon@collabora.com; loic.molinari@collabora.com; willy@infradead.org;
> frank.binns@imgtec.com; matt.coster@imgtec.com; maarten.lankhorst@linux.intel.com; mripard@kernel.org;
> airlied@gmail.com; simona@ffwll.ch; linux-mm@kvack.org; dri-devel@lists.freedesktop.org
> Subject: RE: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
>
> Hi Tommaso,
>
> > -----Original Message-----
> > From: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>
> > Sent: 12 March 2026 17:37
> > Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty
> > status in mmap
> >
> > Hi Thomas,
> > Thanks for your patch.
> >
> > I'm working on DSI support for RZ/G3E from this morning rebasing on
> > top of next-20260311 I'm seeing that weston hang on my side:
> >
> > Reverting this patch fix the issue.
> > (git revert 28e3918179aa)
> >
> > I'm wondering if anyone encountered this issue?
> > Thanks in advance.
>
>
> I am also seeing same issue on RZ/G3L with weston.
Just to add: I am using mesa with panfrost (Mali-G31) on RZ/G3L.
Disabling the Mali-G31 renders the weston desktop during boot.
It looks like this patch causes a hang in the panfrost driver
during weston launch.
Cheers,
Biju
>
> Cheers,
> Biju
>
>
>
> >
> > Kind Regards,
> > Tommaso
> >
> > On Fri, Feb 27, 2026 at 12:42:10PM +0100, Thomas Zimmermann wrote:
> > > Invoke folio_mark_accessed() in mmap page faults to add the folio to
> > > the memory manager's LRU list. Userspace invokes mmap to get the
> > > memory for software rendering. Compositors do the same when creating
> > > the final on-screen image, so keeping the pages in LRU makes sense.
> > > Avoids paging out graphics buffers when under memory pressure.
> > >
> > > In pfn_mkwrite, further invoke the folio_mark_dirty() to add the
> > > folio for writeback should the underlying file be paged out from system memory.
> > > This rarely happens in practice, yet it would corrupt the buffer content.
> > >
> > > This has little effect on a system's hardware-accelerated rendering,
> > > which only mmaps for an initial setup of textures, meshes, shaders, etc.
> > >
> > > v4:
> > > - test for VM_FAULT_NOPAGE before marking folio as accessed (Boris)
> > > - test page-array bounds in mkwrite handler (Boris)
> > > v3:
> > > - rewrite for VM_PFNMAP
> > > v2:
> > > - adapt to changes in drm_gem_shmem_try_mmap_pmd()
> > >
> > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > > Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> > > ---
> > > drivers/gpu/drm/drm_gem_shmem_helper.c | 22 ++++++++++++++++++++++
> > > 1 file changed, 22 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > index cefa50eaf7a4..1ab2bbd3860c 100644
> > > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > @@ -598,6 +598,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> > > if (ret != VM_FAULT_NOPAGE)
> > > ret = vmf_insert_pfn(vma, vmf->address, pfn);
> > >
> > > + if (ret == VM_FAULT_NOPAGE)
> > > + folio_mark_accessed(folio);
> > > +
> > > out:
> > > dma_resv_unlock(obj->resv);
> > >
> > > @@ -638,10 +641,29 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
> > > drm_gem_vm_close(vma);
> > > }
> > >
> > > +static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf) {
> > > + struct vm_area_struct *vma = vmf->vma;
> > > + struct drm_gem_object *obj = vma->vm_private_data;
> > > + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> > > + loff_t num_pages = obj->size >> PAGE_SHIFT;
> > > + pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset
> > > +within VMA */
> > > +
> > > + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> > > + return VM_FAULT_SIGBUS;
> > > +
> > > + file_update_time(vma->vm_file);
> > > +
> > > + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> > > +
> > > + return 0;
> > > +}
> > > +
> > > const struct vm_operations_struct drm_gem_shmem_vm_ops = {
> > > .fault = drm_gem_shmem_fault,
> > > .open = drm_gem_shmem_vm_open,
> > > .close = drm_gem_shmem_vm_close,
> > > + .pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
> > > };
> > > EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
> > >
> > > --
> > > 2.52.0
> > >
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-13 6:44 ` Biju Das
@ 2026-03-13 8:00 ` Thomas Zimmermann
2026-03-13 8:41 ` Biju Das
2026-03-13 10:03 ` Biju Das
2026-03-13 10:18 ` Boris Brezillon
1 sibling, 2 replies; 35+ messages in thread
From: Thomas Zimmermann @ 2026-03-13 8:00 UTC (permalink / raw)
To: Biju Das, Tommaso Merciai
Cc: boris.brezillon@collabora.com, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
Hi
Am 13.03.26 um 07:44 schrieb Biju Das:
>
>> -----Original Message-----
>> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Biju Das
>> Sent: 12 March 2026 17:47
>> To: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>; Thomas Zimmermann <tzimmermann@suse.de>
>> Cc: boris.brezillon@collabora.com; loic.molinari@collabora.com; willy@infradead.org;
>> frank.binns@imgtec.com; matt.coster@imgtec.com; maarten.lankhorst@linux.intel.com; mripard@kernel.org;
>> airlied@gmail.com; simona@ffwll.ch; linux-mm@kvack.org; dri-devel@lists.freedesktop.org
>> Subject: RE: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
>>
>> Hi Tommaso,
>>
>>> -----Original Message-----
>>> From: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>
>>> Sent: 12 March 2026 17:37
>>> Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty
>>> status in mmap
>>>
>>> Hi Thomas,
>>> Thanks for your patch.
>>>
>>> I'm working on DSI support for RZ/G3E from this morning rebasing on
>>> top of next-20260311 I'm seeing that weston hang on my side:
>>>
>>> Reverting this patch fix the issue.
>>> (git revert 28e3918179aa)
>>>
>>> I'm wondering if anyone encountered this issue?
>>> Thanks in advance.
>>
>> I am also seeing same issue on RZ/G3L with weston.
> Just add I am using mesa with panfrost(Mali-G31) on RZ/G3L.
I ran the tests with bochs. I don't have panfrost hardware to test
with, but nothing in the driver sticks out as problematic. Only the gem
open/close logic looks a bit awkward. The mmap code appears to be identical.
>
> Disabling Mali-G31 renders weston desktop during boot.
>
> Looks like this patch is creating some hang in panfrost driver
> during weston launch.
Does either of you see any warnings in the kernel messages?
>
> Cheers,
[...]
> + if (ret == VM_FAULT_NOPAGE)
> + folio_mark_accessed(folio);
> +
[...]
[...]
> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> + return VM_FAULT_SIGBUS;
[...]
> + file_update_time(vma->vm_file);
[...]
> +
> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
Commenting out any of these should be safe. Could you please go through
them one by one and test whether commenting out any of them makes a difference?
Best regards
Thomas
> +
> + return 0;
> +}
> +
> const struct vm_operations_struct drm_gem_shmem_vm_ops = {
> .fault = drm_gem_shmem_fault,
> .open = drm_gem_shmem_vm_open,
> .close = drm_gem_shmem_vm_close,
> + .pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
> };
> EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
>
> --
> 2.52.0
>
>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
GF: Jochen Jaser, Andrew McDonald, Werner Knoblich, (HRB 36809, AG Nürnberg)
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-12 17:46 ` Biju Das
2026-03-13 6:44 ` Biju Das
@ 2026-03-13 8:33 ` Thomas Zimmermann
2026-03-13 8:47 ` Biju Das
2026-03-13 9:24 ` Tommaso Merciai
1 sibling, 2 replies; 35+ messages in thread
From: Thomas Zimmermann @ 2026-03-13 8:33 UTC (permalink / raw)
To: Biju Das, Tommaso Merciai
Cc: boris.brezillon@collabora.com, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
[-- Attachment #1: Type: text/plain, Size: 4004 bytes --]
Hi
Am 12.03.26 um 18:46 schrieb Biju Das:
> Hi Tommaso,
>
>> -----Original Message-----
>> From: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>
>> Sent: 12 March 2026 17:37
>> Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
>>
>> Hi Thomas,
>> Thanks for your patch.
>>
>> I'm working on DSI support for RZ/G3E from this morning rebasing on top of next-20260311 I'm seeing
>> that weston hang on my side:
>>
>> Reverting this patch fix the issue.
>> (git revert 28e3918179aa)
Just a guess, but maybe vm_file is NULL. The attached patch should
handle this. Could either of you please test?
Best regards
Thomas
>>
>> I'm wondering if anyone encountered this issue?
>> Thanks in advance.
>
> I am also seeing same issue on RZ/G3L with weston.
>
> Cheers,
> Biju
>
>
>
>> Kind Regards,
>> Tommaso
>>
>> On Fri, Feb 27, 2026 at 12:42:10PM +0100, Thomas Zimmermann wrote:
>>> Invoke folio_mark_accessed() in mmap page faults to add the folio to
>>> the memory manager's LRU list. Userspace invokes mmap to get the
>>> memory for software rendering. Compositors do the same when creating
>>> the final on-screen image, so keeping the pages in LRU makes sense.
>>> Avoids paging out graphics buffers when under memory pressure.
>>>
>>> In pfn_mkwrite, further invoke the folio_mark_dirty() to add the folio
>>> for writeback should the underlying file be paged out from system memory.
>>> This rarely happens in practice, yet it would corrupt the buffer content.
>>>
>>> This has little effect on a system's hardware-accelerated rendering,
>>> which only mmaps for an initial setup of textures, meshes, shaders, etc.
>>>
>>> v4:
>>> - test for VM_FAULT_NOPAGE before marking folio as accessed (Boris)
>>> - test page-array bounds in mkwrite handler (Boris)
>>> v3:
>>> - rewrite for VM_PFNMAP
>>> v2:
>>> - adapt to changes in drm_gem_shmem_try_mmap_pmd()
>>>
>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>> Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
>>> ---
>>> drivers/gpu/drm/drm_gem_shmem_helper.c | 22 ++++++++++++++++++++++
>>> 1 file changed, 22 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c
>>> b/drivers/gpu/drm/drm_gem_shmem_helper.c
>>> index cefa50eaf7a4..1ab2bbd3860c 100644
>>> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
>>> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
>>> @@ -598,6 +598,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>>> if (ret != VM_FAULT_NOPAGE)
>>> ret = vmf_insert_pfn(vma, vmf->address, pfn);
>>>
>>> + if (ret == VM_FAULT_NOPAGE)
>>> + folio_mark_accessed(folio);
>>> +
>>> out:
>>> dma_resv_unlock(obj->resv);
>>>
>>> @@ -638,10 +641,29 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
>>> drm_gem_vm_close(vma);
>>> }
>>>
>>> +static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf) {
>>> + struct vm_area_struct *vma = vmf->vma;
>>> + struct drm_gem_object *obj = vma->vm_private_data;
>>> + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>>> + loff_t num_pages = obj->size >> PAGE_SHIFT;
>>> + pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset
>>> +within VMA */
>>> +
>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
>>> + return VM_FAULT_SIGBUS;
>>> +
>>> + file_update_time(vma->vm_file);
>>> +
>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
>>> +
>>> + return 0;
>>> +}
>>> +
>>> const struct vm_operations_struct drm_gem_shmem_vm_ops = {
>>> .fault = drm_gem_shmem_fault,
>>> .open = drm_gem_shmem_vm_open,
>>> .close = drm_gem_shmem_vm_close,
>>> + .pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
>>> };
>>> EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
>>>
>>> --
>>> 2.52.0
>>>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
GF: Jochen Jaser, Andrew McDonald, Werner Knoblich, (HRB 36809, AG Nürnberg)
[-- Attachment #2: 0001-test-for-vm_file-before-updating-file-access-time.patch --]
[-- Type: text/x-patch, Size: 898 bytes --]
From e64346c6b3d946e4424066d888cb41fe60eb9d88 Mon Sep 17 00:00:00 2001
From: Thomas Zimmermann <tzimmermann@suse.de>
Date: Fri, 13 Mar 2026 09:29:26 +0100
Subject: [PATCH] test for vm_file before updating file access time
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 4500deef4127..7fad72cc01dc 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -656,7 +656,8 @@ static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
return VM_FAULT_SIGBUS;
- file_update_time(vma->vm_file);
+ if (vma->vm_file)
+ file_update_time(vma->vm_file);
folio_mark_dirty(page_folio(shmem->pages[page_offset]));
--
2.53.0
^ permalink raw reply related [flat|nested] 35+ messages in thread
* RE: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-13 8:00 ` Thomas Zimmermann
@ 2026-03-13 8:41 ` Biju Das
2026-03-13 10:03 ` Biju Das
1 sibling, 0 replies; 35+ messages in thread
From: Biju Das @ 2026-03-13 8:41 UTC (permalink / raw)
To: Thomas Zimmermann, Tommaso Merciai
Cc: boris.brezillon@collabora.com, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
> -----Original Message-----
> From: Thomas Zimmermann <tzimmermann@suse.de>
> Sent: 13 March 2026 08:01
> Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
>
> Hi
>
> Am 13.03.26 um 07:44 schrieb Biju Das:
> >
> >> -----Original Message-----
> >> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf
> >> Of Biju Das
> >> Sent: 12 March 2026 17:47
> >> To: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>; Thomas
> >> Zimmermann <tzimmermann@suse.de>
> >> Cc: boris.brezillon@collabora.com; loic.molinari@collabora.com;
> >> willy@infradead.org; frank.binns@imgtec.com; matt.coster@imgtec.com;
> >> maarten.lankhorst@linux.intel.com; mripard@kernel.org;
> >> airlied@gmail.com; simona@ffwll.ch; linux-mm@kvack.org;
> >> dri-devel@lists.freedesktop.org
> >> Subject: RE: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty
> >> status in mmap
> >>
> >> Hi Tommaso,
> >>
> >>> -----Original Message-----
> >>> From: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>
> >>> Sent: 12 March 2026 17:37
> >>> Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio
> >>> accessed/dirty status in mmap
> >>>
> >>> Hi Thomas,
> >>> Thanks for your patch.
> >>>
> >>> I'm working on DSI support for RZ/G3E from this morning rebasing on
> >>> top of next-20260311 I'm seeing that weston hang on my side:
> >>>
> >>> Reverting this patch fix the issue.
> >>> (git revert 28e3918179aa)
> >>>
> >>> I'm wondering if anyone encountered this issue?
> >>> Thanks in advance.
> >>
> >> I am also seeing same issue on RZ/G3L with weston.
> > Just add I am using mesa with panfrost(Mali-G31) on RZ/G3L.
>
> I ran the tests with bochs. I don't have panfrost hardware to test with, but nothing in the driver
> sticks out as problematic. Only the gem open/close logic looks a bit awkward. The mmap code appears to
> be identical.
>
> >
> > Disabling Mali-G31 renders weston desktop during boot.
> >
> > Looks like this patch is creating some hang in panfrost driver
> > during weston launch.
>
> Does either of you see any warnings in the kernel messages?
There are no kernel messages:
[ OK ] Finished Virtual Console Setup.
[ OK ] Started User Manager for UID 1000.
[ OK ] Started Session c1 of User weston.
Cheers,
Biju
^ permalink raw reply [flat|nested] 35+ messages in thread
* RE: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-13 8:33 ` Thomas Zimmermann
@ 2026-03-13 8:47 ` Biju Das
2026-03-13 9:24 ` Tommaso Merciai
1 sibling, 0 replies; 35+ messages in thread
From: Biju Das @ 2026-03-13 8:47 UTC (permalink / raw)
To: Thomas Zimmermann, Tommaso Merciai
Cc: boris.brezillon@collabora.com, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
> -----Original Message-----
> From: Thomas Zimmermann <tzimmermann@suse.de>
> Sent: 13 March 2026 08:34
> Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
>
> Hi
>
> Am 12.03.26 um 18:46 schrieb Biju Das:
> > Hi Tommaso,
> >
> >> -----Original Message-----
> >> From: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>
> >> Sent: 12 March 2026 17:37
> >> Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty
> >> status in mmap
> >>
> >> Hi Thomas,
> >> Thanks for your patch.
> >>
> >> I'm working on DSI support for RZ/G3E from this morning rebasing on
> >> top of next-20260311 I'm seeing that weston hang on my side:
> >>
> >> Reverting this patch fix the issue.
> >> (git revert 28e3918179aa)
>
> Just a guess, but maybe vm_file is NULL. The attached patch should handle this. Could either of you
> please test?
No luck. I will try to enable some prints in the panfrost driver.
root@smarc-rzv2l:~# systemctl status weston
* weston.service - Weston, a Wayland compositor, as a system service
Loaded: loaded (/usr/lib/systemd/system/weston.service; enabled; preset: enabled)
Active: active (running) since Sat 2000-01-01 00:00:06 UTC; 56s ago
TriggeredBy: * weston.socket
Docs: man:weston(1)
man:weston.ini(5)
http://wayland.freedesktop.org/
Main PID: 238 (weston)
Tasks: 0 (limit: 1796)
Memory: 2.4M (peak: 2.9M)
CPU: 140ms
CGroup: /system.slice/weston.service
> 238 /usr/bin/weston --modules=systemd-notify.so
Jan 01 00:00:01 smarc-rzv2l systemd[1]: Starting Weston, a Wayland compositor, as a system service...
Jan 01 00:00:06 smarc-rzv2l systemd[1]: Started Weston, a Wayland compositor, as a system service.
root@smarc-rzv2l:~#
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-13 8:33 ` Thomas Zimmermann
2026-03-13 8:47 ` Biju Das
@ 2026-03-13 9:24 ` Tommaso Merciai
1 sibling, 0 replies; 35+ messages in thread
From: Tommaso Merciai @ 2026-03-13 9:24 UTC (permalink / raw)
To: Thomas Zimmermann, Biju Das
Cc: boris.brezillon@collabora.com, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
Hi Thomas,
On 3/13/26 09:33, Thomas Zimmermann wrote:
> Hi
>
> Am 12.03.26 um 18:46 schrieb Biju Das:
>> Hi Tommaso,
>>
>>> -----Original Message-----
>>> From: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>
>>> Sent: 12 March 2026 17:37
>>> Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty
>>> status in mmap
>>>
>>> Hi Thomas,
>>> Thanks for your patch.
>>>
>>> I'm working on DSI support for RZ/G3E from this morning rebasing on
>>> top of next-20260311 I'm seeing
>>> that weston hang on my side:
>>>
>>> Reverting this patch fix the issue.
>>> (git revert 28e3918179aa)
>
> Just a guess, but maybe vm_file is NULL. The attached patch should
> handle this. Could either of you please test?
Same here, no luck.
Thanks for sharing.
Kind Regards,
Tommaso
>
> Best regards
> Thomas
>
>
>>>
>>> I'm wondering if anyone encountered this issue?
>>> Thanks in advance.
>>
>> I am also seeing same issue on RZ/G3L with weston.
>>
>> Cheers,
>> Biju
>>
>>
>>
>>> Kind Regards,
>>> Tommaso
>>>
>>> On Fri, Feb 27, 2026 at 12:42:10PM +0100, Thomas Zimmermann wrote:
>>>> Invoke folio_mark_accessed() in mmap page faults to add the folio to
>>>> the memory manager's LRU list. Userspace invokes mmap to get the
>>>> memory for software rendering. Compositors do the same when creating
>>>> the final on-screen image, so keeping the pages in LRU makes sense.
>>>> Avoids paging out graphics buffers when under memory pressure.
>>>>
>>>> In pfn_mkwrite, further invoke the folio_mark_dirty() to add the folio
>>>> for writeback should the underlying file be paged out from system
>>>> memory.
>>>> This rarely happens in practice, yet it would corrupt the buffer
>>>> content.
>>>>
>>>> This has little effect on a system's hardware-accelerated rendering,
>>>> which only mmaps for an initial setup of textures, meshes, shaders,
>>>> etc.
>>>>
>>>> v4:
>>>> - test for VM_FAULT_NOPAGE before marking folio as accessed (Boris)
>>>> - test page-array bounds in mkwrite handler (Boris)
>>>> v3:
>>>> - rewrite for VM_PFNMAP
>>>> v2:
>>>> - adapt to changes in drm_gem_shmem_try_mmap_pmd()
>>>>
>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>> Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
>>>> ---
>>>> drivers/gpu/drm/drm_gem_shmem_helper.c | 22 ++++++++++++++++++++++
>>>> 1 file changed, 22 insertions(+)
>>>>
>>>> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c
>>>> b/drivers/gpu/drm/drm_gem_shmem_helper.c
>>>> index cefa50eaf7a4..1ab2bbd3860c 100644
>>>> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
>>>> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
>>>> @@ -598,6 +598,9 @@ static vm_fault_t drm_gem_shmem_fault(struct
>>>> vm_fault *vmf)
>>>> if (ret != VM_FAULT_NOPAGE)
>>>> ret = vmf_insert_pfn(vma, vmf->address, pfn);
>>>>
>>>> + if (ret == VM_FAULT_NOPAGE)
>>>> + folio_mark_accessed(folio);
>>>> +
>>>> out:
>>>> dma_resv_unlock(obj->resv);
>>>>
>>>> @@ -638,10 +641,29 @@ static void drm_gem_shmem_vm_close(struct
>>>> vm_area_struct *vma)
>>>> drm_gem_vm_close(vma);
>>>> }
>>>>
>>>> +static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf) {
>>>> + struct vm_area_struct *vma = vmf->vma;
>>>> + struct drm_gem_object *obj = vma->vm_private_data;
>>>> + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>>>> + loff_t num_pages = obj->size >> PAGE_SHIFT;
>>>> + pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset
>>>> +within VMA */
>>>> +
>>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >=
>>>> num_pages))
>>>> + return VM_FAULT_SIGBUS;
>>>> +
>>>> + file_update_time(vma->vm_file);
>>>> +
>>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> const struct vm_operations_struct drm_gem_shmem_vm_ops = {
>>>> .fault = drm_gem_shmem_fault,
>>>> .open = drm_gem_shmem_vm_open,
>>>> .close = drm_gem_shmem_vm_close,
>>>> + .pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
>>>> };
>>>> EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
>>>>
>>>> --
>>>> 2.52.0
>>>>
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* RE: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-13 8:00 ` Thomas Zimmermann
2026-03-13 8:41 ` Biju Das
@ 2026-03-13 10:03 ` Biju Das
1 sibling, 0 replies; 35+ messages in thread
From: Biju Das @ 2026-03-13 10:03 UTC (permalink / raw)
To: Thomas Zimmermann, Tommaso Merciai
Cc: boris.brezillon@collabora.com, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
Hi Thomas,
> -----Original Message-----
> From: Thomas Zimmermann <tzimmermann@suse.de>
> Sent: 13 March 2026 08:01
> Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
>
> > const struct vm_operations_struct drm_gem_shmem_vm_ops = {
> > .fault = drm_gem_shmem_fault,
> > .open = drm_gem_shmem_vm_open,
> > .close = drm_gem_shmem_vm_close,
> > + .pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
Commenting this out makes weston work. Kmscube works even without this change.
Cheers,
Biju
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-13 6:44 ` Biju Das
2026-03-13 8:00 ` Thomas Zimmermann
@ 2026-03-13 10:18 ` Boris Brezillon
2026-03-13 10:29 ` Thomas Zimmermann
1 sibling, 1 reply; 35+ messages in thread
From: Boris Brezillon @ 2026-03-13 10:18 UTC (permalink / raw)
To: Biju Das
Cc: Tommaso Merciai, Thomas Zimmermann, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
On Fri, 13 Mar 2026 06:44:25 +0000
Biju Das <biju.das.jz@bp.renesas.com> wrote:
> > -----Original Message-----
> > From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Biju Das
> > Sent: 12 March 2026 17:47
> > To: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>; Thomas Zimmermann <tzimmermann@suse.de>
> > Cc: boris.brezillon@collabora.com; loic.molinari@collabora.com; willy@infradead.org;
> > frank.binns@imgtec.com; matt.coster@imgtec.com; maarten.lankhorst@linux.intel.com; mripard@kernel.org;
> > airlied@gmail.com; simona@ffwll.ch; linux-mm@kvack.org; dri-devel@lists.freedesktop.org
> > Subject: RE: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
> >
> > Hi Tommaso,
> >
> > > -----Original Message-----
> > > From: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>
> > > Sent: 12 March 2026 17:37
> > > Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty
> > > status in mmap
> > >
> > > Hi Thomas,
> > > Thanks for your patch.
> > >
> > > I'm working on DSI support for RZ/G3E from this morning rebasing on
> > > top of next-20260311 I'm seeing that weston hang on my side:
> > >
> > > Reverting this patch fix the issue.
> > > (git revert 28e3918179aa)
> > >
> > > I'm wondering if anyone encountered this issue?
> > > Thanks in advance.
> >
> >
> > I am also seeing same issue on RZ/G3L with weston.
>
> Just add I am using mesa with panfrost(Mali-G31) on RZ/G3L.
>
> Disabling Mali-G31 renders weston desktop during boot.
>
> Looks like this patch is creating some hang in panfrost driver
> during weston launch.
>
> Cheers,
> Biju
>
>
>
> >
> > Cheers,
> > Biju
> >
> >
> >
> > >
> > > Kind Regards,
> > > Tommaso
> > >
> > > On Fri, Feb 27, 2026 at 12:42:10PM +0100, Thomas Zimmermann wrote:
> > > > Invoke folio_mark_accessed() in mmap page faults to add the folio to
> > > > the memory manager's LRU list. Userspace invokes mmap to get the
> > > > memory for software rendering. Compositors do the same when creating
> > > > the final on-screen image, so keeping the pages in LRU makes sense.
> > > > Avoids paging out graphics buffers when under memory pressure.
> > > >
> > > > In pfn_mkwrite, further invoke the folio_mark_dirty() to add the
> > > > folio for writeback should the underlying file be paged out from system memory.
> > > > This rarely happens in practice, yet it would corrupt the buffer content.
> > > >
> > > > This has little effect on a system's hardware-accelerated rendering,
> > > > which only mmaps for an initial setup of textures, meshes, shaders, etc.
> > > >
> > > > v4:
> > > > - test for VM_FAULT_NOPAGE before marking folio as accessed (Boris)
> > > > - test page-array bounds in mkwrite handler (Boris)
> > > > v3:
> > > > - rewrite for VM_PFNMAP
> > > > v2:
> > > > - adapt to changes in drm_gem_shmem_try_mmap_pmd()
> > > >
> > > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > > > Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> > > > ---
> > > > drivers/gpu/drm/drm_gem_shmem_helper.c | 22 ++++++++++++++++++++++
> > > > 1 file changed, 22 insertions(+)
> > > >
> > > > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > > b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > > index cefa50eaf7a4..1ab2bbd3860c 100644
> > > > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > > @@ -598,6 +598,9 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> > > > if (ret != VM_FAULT_NOPAGE)
> > > > ret = vmf_insert_pfn(vma, vmf->address, pfn);
> > > >
> > > > + if (ret == VM_FAULT_NOPAGE)
> > > > + folio_mark_accessed(folio);
> > > > +
> > > > out:
> > > > dma_resv_unlock(obj->resv);
> > > >
> > > > @@ -638,10 +641,29 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
> > > > drm_gem_vm_close(vma);
> > > > }
> > > >
> > > > +static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf) {
> > > > + struct vm_area_struct *vma = vmf->vma;
> > > > + struct drm_gem_object *obj = vma->vm_private_data;
> > > > + struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> > > > + loff_t num_pages = obj->size >> PAGE_SHIFT;
> > > > + pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset
> > > > +within VMA */
> > > > +
> > > > + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> > > > + return VM_FAULT_SIGBUS;
> > > > +
> > > > + file_update_time(vma->vm_file);
> > > > +
> > > > + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
Do we need a folio_mark_dirty_lock() here?
> > > > +
> > > > + return 0;
> > > > +}
> > > > +
> > > > const struct vm_operations_struct drm_gem_shmem_vm_ops = {
> > > > .fault = drm_gem_shmem_fault,
> > > > .open = drm_gem_shmem_vm_open,
> > > > .close = drm_gem_shmem_vm_close,
> > > > + .pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
> > > > };
> > > > EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
> > > >
> > > > --
> > > > 2.52.0
> > > >
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-13 10:18 ` Boris Brezillon
@ 2026-03-13 10:29 ` Thomas Zimmermann
2026-03-13 10:33 ` Biju Das
` (2 more replies)
0 siblings, 3 replies; 35+ messages in thread
From: Thomas Zimmermann @ 2026-03-13 10:29 UTC (permalink / raw)
To: Boris Brezillon, Biju Das
Cc: Tommaso Merciai, loic.molinari@collabora.com, willy@infradead.org,
frank.binns@imgtec.com, matt.coster@imgtec.com,
maarten.lankhorst@linux.intel.com, mripard@kernel.org,
airlied@gmail.com, simona@ffwll.ch, linux-mm@kvack.org,
dri-devel@lists.freedesktop.org
Hi
Am 13.03.26 um 11:18 schrieb Boris Brezillon:
[...]
>>>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
>>>>> + return VM_FAULT_SIGBUS;
>>>>> +
>>>>> + file_update_time(vma->vm_file);
>>>>> +
>>>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> Do we need a folio_mark_dirty_lock() here?
There is a helper for that with some documentation. [1]
[1]
https://elixir.bootlin.com/linux/v6.19.7/source/mm/page-writeback.c#L2826
Best regards
Thomas
>
>>>>> +
>>>>> + return 0;
>>>>> +}
>>>>> +
>>>>> const struct vm_operations_struct drm_gem_shmem_vm_ops = {
>>>>> .fault = drm_gem_shmem_fault,
>>>>> .open = drm_gem_shmem_vm_open,
>>>>> .close = drm_gem_shmem_vm_close,
>>>>> + .pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
>>>>> };
>>>>> EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
>>>>>
>>>>> --
>>>>> 2.52.0
>>>>>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
GF: Jochen Jaser, Andrew McDonald, Werner Knoblich, (HRB 36809, AG Nürnberg)
^ permalink raw reply [flat|nested] 35+ messages in thread
* RE: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-13 10:29 ` Thomas Zimmermann
@ 2026-03-13 10:33 ` Biju Das
2026-03-13 10:52 ` Boris Brezillon
2026-03-13 11:56 ` Boris Brezillon
2 siblings, 0 replies; 35+ messages in thread
From: Biju Das @ 2026-03-13 10:33 UTC (permalink / raw)
To: Thomas Zimmermann, Boris Brezillon
Cc: Tommaso Merciai, loic.molinari@collabora.com, willy@infradead.org,
frank.binns@imgtec.com, matt.coster@imgtec.com,
maarten.lankhorst@linux.intel.com, mripard@kernel.org,
airlied@gmail.com, simona@ffwll.ch, linux-mm@kvack.org,
dri-devel@lists.freedesktop.org
Hi Thomas,
> -----Original Message-----
> From: Thomas Zimmermann <tzimmermann@suse.de>
> Sent: 13 March 2026 10:30
> Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
>
> Hi
>
> Am 13.03.26 um 11:18 schrieb Boris Brezillon:
> [...]
> >>>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> >>>>> + return VM_FAULT_SIGBUS;
> >>>>> +
> >>>>> + file_update_time(vma->vm_file);
> >>>>> +
> >>>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> > Do we need a folio_mark_dirty_lock() here?
>
> There is a helper for that with some documentation. [1]
>
> [1]
> https://elixir.bootlin.com/linux/v6.19.7/source/mm/page-writeback.c#L2826
FYI, I commented out all the calls in the patch; that did not solve the issue.
Commenting out the callback below fixed the issue.
//.pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
Cheers,
Biju
> >
> >>>>> +
> >>>>> + return 0;
> >>>>> +}
> >>>>> +
> >>>>> const struct vm_operations_struct drm_gem_shmem_vm_ops = {
> >>>>> .fault = drm_gem_shmem_fault,
> >>>>> .open = drm_gem_shmem_vm_open,
> >>>>> .close = drm_gem_shmem_vm_close,
> >>>>> + .pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
> >>>>> };
> >>>>> EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
> >>>>>
> >>>>> --
> >>>>> 2.52.0
> >>>>>
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
> GF: Jochen Jaser, Andrew McDonald, Werner Knoblich, (HRB 36809, AG Nürnberg)
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-13 10:29 ` Thomas Zimmermann
2026-03-13 10:33 ` Biju Das
@ 2026-03-13 10:52 ` Boris Brezillon
2026-03-13 11:56 ` Boris Brezillon
2 siblings, 0 replies; 35+ messages in thread
From: Boris Brezillon @ 2026-03-13 10:52 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: Biju Das, Tommaso Merciai, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
On Fri, 13 Mar 2026 11:29:47 +0100
Thomas Zimmermann <tzimmermann@suse.de> wrote:
> Hi
>
> Am 13.03.26 um 11:18 schrieb Boris Brezillon:
> [...]
> >>>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> >>>>> + return VM_FAULT_SIGBUS;
> >>>>> +
> >>>>> + file_update_time(vma->vm_file);
> >>>>> +
> >>>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> > Do we need a folio_mark_dirty_lock() here?
>
> There is a helper for that with some documentation. [1]
I know, I meant using that version instead of folio_mark_dirty().
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-13 10:29 ` Thomas Zimmermann
2026-03-13 10:33 ` Biju Das
2026-03-13 10:52 ` Boris Brezillon
@ 2026-03-13 11:56 ` Boris Brezillon
2026-03-13 12:04 ` Biju Das
2 siblings, 1 reply; 35+ messages in thread
From: Boris Brezillon @ 2026-03-13 11:56 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: Biju Das, Tommaso Merciai, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
On Fri, 13 Mar 2026 11:29:47 +0100
Thomas Zimmermann <tzimmermann@suse.de> wrote:
> Hi
>
> Am 13.03.26 um 11:18 schrieb Boris Brezillon:
> [...]
> >>>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> >>>>> + return VM_FAULT_SIGBUS;
> >>>>> +
> >>>>> + file_update_time(vma->vm_file);
> >>>>> +
> >>>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> > Do we need a folio_mark_dirty_lock() here?
>
> There is a helper for that with some documentation. [1]
This [1] seems to solve the problem for me. Still unsure about the
folio_mark_dirty_lock vs folio_mark_dirty though.
[1]https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodemargomes@gmail.com/
^ permalink raw reply [flat|nested] 35+ messages in thread
* RE: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-13 11:56 ` Boris Brezillon
@ 2026-03-13 12:04 ` Biju Das
2026-03-13 12:18 ` Boris Brezillon
0 siblings, 1 reply; 35+ messages in thread
From: Biju Das @ 2026-03-13 12:04 UTC (permalink / raw)
To: Boris Brezillon, Thomas Zimmermann
Cc: Tommaso Merciai, loic.molinari@collabora.com, willy@infradead.org,
frank.binns@imgtec.com, matt.coster@imgtec.com,
maarten.lankhorst@linux.intel.com, mripard@kernel.org,
airlied@gmail.com, simona@ffwll.ch, linux-mm@kvack.org,
dri-devel@lists.freedesktop.org
> -----Original Message-----
> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Boris Brezillon
> Sent: 13 March 2026 11:57
> Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
>
> On Fri, 13 Mar 2026 11:29:47 +0100
> Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
> > Hi
> >
> > Am 13.03.26 um 11:18 schrieb Boris Brezillon:
> > [...]
> > >>>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> > >>>>> + return VM_FAULT_SIGBUS;
> > >>>>> +
> > >>>>> + file_update_time(vma->vm_file);
> > >>>>> +
> > >>>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> > > Do we need a folio_mark_dirty_lock() here?
> >
> > There is a helper for that with some documentation. [1]
>
> This [1] seems to solve the problem for me. Still unsure about the folio_mark_dirty_lock vs
> folio_mark_dirty though.
>
> [1]https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodemargomes@gmail.com/
FYI, I used folio_mark_dirty_lock(), but it still does not solve the weston hang.
Cheers,
Biju
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-13 12:04 ` Biju Das
@ 2026-03-13 12:18 ` Boris Brezillon
2026-03-13 12:43 ` Boris Brezillon
0 siblings, 1 reply; 35+ messages in thread
From: Boris Brezillon @ 2026-03-13 12:18 UTC (permalink / raw)
To: Biju Das
Cc: Thomas Zimmermann, Tommaso Merciai, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
On Fri, 13 Mar 2026 12:04:25 +0000
Biju Das <biju.das.jz@bp.renesas.com> wrote:
> > -----Original Message-----
> > From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Boris Brezillon
> > Sent: 13 March 2026 11:57
> > Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
> >
> > On Fri, 13 Mar 2026 11:29:47 +0100
> > Thomas Zimmermann <tzimmermann@suse.de> wrote:
> >
> > > Hi
> > >
> > > Am 13.03.26 um 11:18 schrieb Boris Brezillon:
> > > [...]
> > > >>>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> > > >>>>> + return VM_FAULT_SIGBUS;
> > > >>>>> +
> > > >>>>> + file_update_time(vma->vm_file);
> > > >>>>> +
> > > >>>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> > > > Do we need a folio_mark_dirty_lock() here?
> > >
> > > There is a helper for that with some documentation. [1]
> >
> > This [1] seems to solve the problem for me. Still unsure about the folio_mark_dirty_lock vs
> > folio_mark_dirty though.
> >
> > [1]https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodemargomes@gmail.com/
>
> FYI, I used folio_mark_dirty_lock() still it does not solve the issue with weston hang.
The patch I pointed to has nothing to do with folio_mark_dirty_lock();
it fixes a bug caused by the huge-page mapping changes.
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-13 12:18 ` Boris Brezillon
@ 2026-03-13 12:43 ` Boris Brezillon
2026-03-13 12:55 ` Boris Brezillon
0 siblings, 1 reply; 35+ messages in thread
From: Boris Brezillon @ 2026-03-13 12:43 UTC (permalink / raw)
To: Biju Das
Cc: Thomas Zimmermann, Tommaso Merciai, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
On Fri, 13 Mar 2026 13:18:35 +0100
Boris Brezillon <boris.brezillon@collabora.com> wrote:
> On Fri, 13 Mar 2026 12:04:25 +0000
> Biju Das <biju.das.jz@bp.renesas.com> wrote:
>
> > > -----Original Message-----
> > > From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Boris Brezillon
> > > Sent: 13 March 2026 11:57
> > > Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
> > >
> > > On Fri, 13 Mar 2026 11:29:47 +0100
> > > Thomas Zimmermann <tzimmermann@suse.de> wrote:
> > >
> > > > Hi
> > > >
> > > > Am 13.03.26 um 11:18 schrieb Boris Brezillon:
> > > > [...]
> > > > >>>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> > > > >>>>> + return VM_FAULT_SIGBUS;
> > > > >>>>> +
> > > > >>>>> + file_update_time(vma->vm_file);
> > > > >>>>> +
> > > > >>>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> > > > > Do we need a folio_mark_dirty_lock() here?
> > > >
> > > > There is a helper for that with some documentation. [1]
> > >
> > > This [1] seems to solve the problem for me. Still unsure about the folio_mark_dirty_lock vs
> > > folio_mark_dirty though.
> > >
> > > [1]https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodemargomes@gmail.com/
> >
> > FYI, I used folio_mark_dirty_lock() still it does not solve the issue with weston hang.
>
> The patch I pointed to has nothing to do with folio_mark_dirty_lock(),
> It's a bug caused by huge page mapping changes.
Scratch that. I had a bunch of other changes on top, and it hangs again
now that I dropped those.
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-13 12:43 ` Boris Brezillon
@ 2026-03-13 12:55 ` Boris Brezillon
2026-03-13 17:45 ` Boris Brezillon
0 siblings, 1 reply; 35+ messages in thread
From: Boris Brezillon @ 2026-03-13 12:55 UTC (permalink / raw)
To: Biju Das
Cc: Thomas Zimmermann, Tommaso Merciai, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
On Fri, 13 Mar 2026 13:43:28 +0100
Boris Brezillon <boris.brezillon@collabora.com> wrote:
> On Fri, 13 Mar 2026 13:18:35 +0100
> Boris Brezillon <boris.brezillon@collabora.com> wrote:
>
> > On Fri, 13 Mar 2026 12:04:25 +0000
> > Biju Das <biju.das.jz@bp.renesas.com> wrote:
> >
> > > > -----Original Message-----
> > > > From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Boris Brezillon
> > > > Sent: 13 March 2026 11:57
> > > > Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
> > > >
> > > > On Fri, 13 Mar 2026 11:29:47 +0100
> > > > Thomas Zimmermann <tzimmermann@suse.de> wrote:
> > > >
> > > > > Hi
> > > > >
> > > > > Am 13.03.26 um 11:18 schrieb Boris Brezillon:
> > > > > [...]
> > > > > >>>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> > > > > >>>>> + return VM_FAULT_SIGBUS;
> > > > > >>>>> +
> > > > > >>>>> + file_update_time(vma->vm_file);
> > > > > >>>>> +
> > > > > >>>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> > > > > > Do we need a folio_mark_dirty_lock() here?
> > > > >
> > > > > There is a helper for that with some documentation. [1]
> > > >
> > > > This [1] seems to solve the problem for me. Still unsure about the folio_mark_dirty_lock vs
> > > > folio_mark_dirty though.
> > > >
> > > > [1]https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodemargomes@gmail.com/
> > >
> > > FYI, I used folio_mark_dirty_lock() still it does not solve the issue with weston hang.
> >
> > The patch I pointed to has nothing to do with folio_mark_dirty_lock(),
> > It's a bug caused by huge page mapping changes.
>
> Scratch that. I had a bunch of other changes on top, and it hangs again
> now that I dropped those.
Seems like it's the combination of huge pages and mkwrite that's
causing issues; if I disable huge pages, it doesn't hang...
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-13 12:55 ` Boris Brezillon
@ 2026-03-13 17:45 ` Boris Brezillon
2026-03-14 9:42 ` Biju Das
2026-03-16 8:45 ` Thomas Zimmermann
0 siblings, 2 replies; 35+ messages in thread
From: Boris Brezillon @ 2026-03-13 17:45 UTC (permalink / raw)
To: Biju Das
Cc: Thomas Zimmermann, Tommaso Merciai, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
On Fri, 13 Mar 2026 13:55:21 +0100
Boris Brezillon <boris.brezillon@collabora.com> wrote:
> On Fri, 13 Mar 2026 13:43:28 +0100
> Boris Brezillon <boris.brezillon@collabora.com> wrote:
>
> > On Fri, 13 Mar 2026 13:18:35 +0100
> > Boris Brezillon <boris.brezillon@collabora.com> wrote:
> >
> > > On Fri, 13 Mar 2026 12:04:25 +0000
> > > Biju Das <biju.das.jz@bp.renesas.com> wrote:
> > >
> > > > > -----Original Message-----
> > > > > From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Boris Brezillon
> > > > > Sent: 13 March 2026 11:57
> > > > > Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
> > > > >
> > > > > On Fri, 13 Mar 2026 11:29:47 +0100
> > > > > Thomas Zimmermann <tzimmermann@suse.de> wrote:
> > > > >
> > > > > > Hi
> > > > > >
> > > > > > Am 13.03.26 um 11:18 schrieb Boris Brezillon:
> > > > > > [...]
> > > > > > >>>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> > > > > > >>>>> + return VM_FAULT_SIGBUS;
> > > > > > >>>>> +
> > > > > > >>>>> + file_update_time(vma->vm_file);
> > > > > > >>>>> +
> > > > > > >>>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> > > > > > > Do we need a folio_mark_dirty_lock() here?
> > > > > >
> > > > > > There is a helper for that with some documentation. [1]
> > > > >
> > > > > This [1] seems to solve the problem for me. Still unsure about the folio_mark_dirty_lock vs
> > > > > folio_mark_dirty though.
> > > > >
> > > > > [1]https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodemargomes@gmail.com/
> > > >
> > > > FYI, I used folio_mark_dirty_lock() still it does not solve the issue with weston hang.
> > >
> > > The patch I pointed to has nothing to do with folio_mark_dirty_lock(),
> > > It's a bug caused by huge page mapping changes.
> >
> > Scratch that. I had a bunch of other changes on top, and it hangs again
> > now that I dropped those.
>
> Seems like it's the combination of huge pages and mkwrite that's
> causing issues, if I disable huge pages, it doesn't hang...
I managed to get it working with the following diff. I still need to
check why the "map-RO-split+RW-on-demand" approach doesn't work (races
between huge_fault and pfn_mkwrite?), but I think it's okay to map the
real thing writable on the first attempt anyway (we're not trying to do
CoW here; since we always point to the same page, only the permissions
change). Note that the race fixed by
https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodemargomes@gmail.com/
is still present in this diff; I just tried to keep the diffstat minimal.
--->8---
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 4500deef4127..4efdce5a60f0 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -561,9 +561,8 @@ static vm_fault_t drm_gem_shmem_try_insert_pfn_pmd(struct vm_fault *vmf, unsigne
bool aligned = (vmf->address & ~PMD_MASK) == (paddr & ~PMD_MASK);
if (aligned && pmd_none(*vmf->pmd)) {
- /* Read-only mapping; split upon write fault */
pfn &= PMD_MASK >> PAGE_SHIFT;
- return vmf_insert_pfn_pmd(vmf, pfn, false);
+ return vmf_insert_pfn_pmd(vmf, pfn, vmf->flags & FAULT_FLAG_WRITE);
}
#endif
@@ -597,8 +596,12 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
pfn = page_to_pfn(page);
- if (folio_test_pmd_mappable(folio))
+ if (folio_test_pmd_mappable(folio)) {
ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
+ if (ret == VM_FAULT_NOPAGE && vmf->flags & FAULT_FLAG_WRITE)
+ folio_mark_dirty(folio);
+ }
+
if (ret != VM_FAULT_NOPAGE)
ret = vmf_insert_pfn(vma, vmf->address, pfn);
^ permalink raw reply related [flat|nested] 35+ messages in thread
* RE: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-13 17:45 ` Boris Brezillon
@ 2026-03-14 9:42 ` Biju Das
2026-03-19 14:17 ` Biju Das
2026-03-16 8:45 ` Thomas Zimmermann
1 sibling, 1 reply; 35+ messages in thread
From: Biju Das @ 2026-03-14 9:42 UTC (permalink / raw)
To: Boris Brezillon
Cc: Thomas Zimmermann, Tommaso Merciai, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
Hi Boris Brezillon,
> -----Original Message-----
> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Boris Brezillon
> Sent: 13 March 2026 17:46
> Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
>
> On Fri, 13 Mar 2026 13:55:21 +0100
> Boris Brezillon <boris.brezillon@collabora.com> wrote:
>
> > On Fri, 13 Mar 2026 13:43:28 +0100
> > Boris Brezillon <boris.brezillon@collabora.com> wrote:
> >
> > > On Fri, 13 Mar 2026 13:18:35 +0100
> > > Boris Brezillon <boris.brezillon@collabora.com> wrote:
> > >
> > > > On Fri, 13 Mar 2026 12:04:25 +0000 Biju Das
> > > > <biju.das.jz@bp.renesas.com> wrote:
> > > >
> > > > > > -----Original Message-----
> > > > > > From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On
> > > > > > Behalf Of Boris Brezillon
> > > > > > Sent: 13 March 2026 11:57
> > > > > > Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio
> > > > > > accessed/dirty status in mmap
> > > > > >
> > > > > > On Fri, 13 Mar 2026 11:29:47 +0100 Thomas Zimmermann
> > > > > > <tzimmermann@suse.de> wrote:
> > > > > >
> > > > > > > Hi
> > > > > > >
> > > > > > > Am 13.03.26 um 11:18 schrieb Boris Brezillon:
> > > > > > > [...]
> > > > > > > >>>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> > > > > > > >>>>> + return VM_FAULT_SIGBUS;
> > > > > > > >>>>> +
> > > > > > > >>>>> + file_update_time(vma->vm_file);
> > > > > > > >>>>> +
> > > > > > > >>>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> > > > > > > > Do we need a folio_mark_dirty_lock() here?
> > > > > > >
> > > > > > > There is a helper for that with some documentation. [1]
> > > > > >
> > > > > > This [1] seems to solve the problem for me. Still unsure about
> > > > > > the folio_mark_dirty_lock vs folio_mark_dirty though.
> > > > > >
> > > > > > [1]https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodemargomes@gmail.com/
> > > > >
> > > > > FYI, I used folio_mark_dirty_lock() still it does not solve the issue with weston hang.
> > > >
> > > > The patch I pointed to has nothing to do with folio_mark_dirty_lock(),
> > > > It's a bug caused by huge page mapping changes.
> > >
> > > Scratch that. I had a bunch of other changes on top, and it hangs
> > > again now that I dropped those.
> >
> > Seems like it's the combination of huge pages and mkwrite that's
> > causing issues, if I disable huge pages, it doesn't hang...
>
> I managed to have it working with the following diff. I still need to check why the "map-RO-split+RW-
> on-demand" approach doesn't work (races between huge_fault and pfn_mkwrite?), but I think it's okay to
> map the real thing writable on the first attempt anyway (we're not trying to do CoW here, since we're
> always pointing to the same page, it's just the permissions that change). Note that there's still the
> race fixed by https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodemargomes@gmail.com/
> in this diff, I just tried to keep the diffstat minimal.
I confirm that with this diff, weston now comes up on the RZ/V2L SMARC EVK.
I am just using drm-misc-next + this diff + arm64 defconfig + Poky (Yocto Project Reference Distro) 5.0.11 + Mesa.
Cheers,
Biju
>
> --->8---
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 4500deef4127..4efdce5a60f0 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -561,9 +561,8 @@ static vm_fault_t drm_gem_shmem_try_insert_pfn_pmd(struct vm_fault *vmf, unsigne
> bool aligned = (vmf->address & ~PMD_MASK) == (paddr & ~PMD_MASK);
>
> if (aligned && pmd_none(*vmf->pmd)) {
> - /* Read-only mapping; split upon write fault */
> pfn &= PMD_MASK >> PAGE_SHIFT;
> - return vmf_insert_pfn_pmd(vmf, pfn, false);
> + return vmf_insert_pfn_pmd(vmf, pfn, vmf->flags &
> + FAULT_FLAG_WRITE);
> }
> #endif
>
> @@ -597,8 +596,12 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>
> pfn = page_to_pfn(page);
>
> - if (folio_test_pmd_mappable(folio))
> + if (folio_test_pmd_mappable(folio)) {
> ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
> + if (ret == VM_FAULT_NOPAGE && vmf->flags & FAULT_FLAG_WRITE)
> + folio_mark_dirty(folio);
> + }
> +
> if (ret != VM_FAULT_NOPAGE)
> ret = vmf_insert_pfn(vma, vmf->address, pfn);
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-13 17:45 ` Boris Brezillon
2026-03-14 9:42 ` Biju Das
@ 2026-03-16 8:45 ` Thomas Zimmermann
2026-03-16 9:36 ` Boris Brezillon
2026-03-16 15:30 ` Boris Brezillon
1 sibling, 2 replies; 35+ messages in thread
From: Thomas Zimmermann @ 2026-03-16 8:45 UTC (permalink / raw)
To: Boris Brezillon, Biju Das
Cc: Tommaso Merciai, loic.molinari@collabora.com, willy@infradead.org,
frank.binns@imgtec.com, matt.coster@imgtec.com,
maarten.lankhorst@linux.intel.com, mripard@kernel.org,
airlied@gmail.com, simona@ffwll.ch, linux-mm@kvack.org,
dri-devel@lists.freedesktop.org
Hi Boris,
thanks for investigating this problem.
Am 13.03.26 um 18:45 schrieb Boris Brezillon:
> On Fri, 13 Mar 2026 13:55:21 +0100
> Boris Brezillon <boris.brezillon@collabora.com> wrote:
>
>> On Fri, 13 Mar 2026 13:43:28 +0100
>> Boris Brezillon <boris.brezillon@collabora.com> wrote:
>>
>>> On Fri, 13 Mar 2026 13:18:35 +0100
>>> Boris Brezillon <boris.brezillon@collabora.com> wrote:
>>>
>>>> On Fri, 13 Mar 2026 12:04:25 +0000
>>>> Biju Das <biju.das.jz@bp.renesas.com> wrote:
>>>>
>>>>>> -----Original Message-----
>>>>>> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Boris Brezillon
>>>>>> Sent: 13 March 2026 11:57
>>>>>> Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
>>>>>>
>>>>>> On Fri, 13 Mar 2026 11:29:47 +0100
>>>>>> Thomas Zimmermann <tzimmermann@suse.de> wrote:
>>>>>>
>>>>>>> Hi
>>>>>>>
>>>>>>> Am 13.03.26 um 11:18 schrieb Boris Brezillon:
>>>>>>> [...]
>>>>>>>>>>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
>>>>>>>>>>>> + return VM_FAULT_SIGBUS;
>>>>>>>>>>>> +
>>>>>>>>>>>> + file_update_time(vma->vm_file);
>>>>>>>>>>>> +
>>>>>>>>>>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
>>>>>>>> Do we need a folio_mark_dirty_lock() here?
>>>>>>> There is a helper for that with some documentation. [1]
>>>>>> This [1] seems to solve the problem for me. Still unsure about the folio_mark_dirty_lock vs
>>>>>> folio_mark_dirty though.
>>>>>>
>>>>>> [1]https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodemargomes@gmail.com/
>>>>> FYI, I used folio_mark_dirty_lock() still it does not solve the issue with weston hang.
>>>> The patch I pointed to has nothing to do with folio_mark_dirty_lock(),
>>>> It's a bug caused by huge page mapping changes.
>>> Scratch that. I had a bunch of other changes on top, and it hangs again
>>> now that I dropped those.
>> Seems like it's the combination of huge pages and mkwrite that's
>> causing issues, if I disable huge pages, it doesn't hang...
> I managed to have it working with the following diff. I still need to
> check why the "map-RO-split+RW-on-demand" approach doesn't work (races
> between huge_fault and pfn_mkwrite?), but I think it's okay to map the
> real thing writable on the first attempt anyway (we're not trying to do
> CoW here, since we're always pointing to the same page, it's just the
> permissions that change). Note that there's still the race fixed by
> https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodemargomes@gmail.com/
> in this diff, I just tried to keep the diffstat minimal.
>
> --->8---
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 4500deef4127..4efdce5a60f0 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -561,9 +561,8 @@ static vm_fault_t drm_gem_shmem_try_insert_pfn_pmd(struct vm_fault *vmf, unsigne
> bool aligned = (vmf->address & ~PMD_MASK) == (paddr & ~PMD_MASK);
>
> if (aligned && pmd_none(*vmf->pmd)) {
> - /* Read-only mapping; split upon write fault */
> pfn &= PMD_MASK >> PAGE_SHIFT;
> - return vmf_insert_pfn_pmd(vmf, pfn, false);
> + return vmf_insert_pfn_pmd(vmf, pfn, vmf->flags & FAULT_FLAG_WRITE);
> }
> #endif
>
> @@ -597,8 +596,12 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>
> pfn = page_to_pfn(page);
>
> - if (folio_test_pmd_mappable(folio))
> + if (folio_test_pmd_mappable(folio)) {
> ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
> + if (ret == VM_FAULT_NOPAGE && vmf->flags & FAULT_FLAG_WRITE)
> + folio_mark_dirty(folio);
> + }
> +
> if (ret != VM_FAULT_NOPAGE)
> ret = vmf_insert_pfn(vma, vmf->address, pfn);
All these branches with NOPAGE seem confusing. Can we change the overall
logic here? Something like:
if (folio_test_pmd_mappable()) {
ret = try_insert_pfn_pmd()
if (ret == VM_FAULT_NOPAGE) {
folio_mark_accessed()
if (flags & FLAG_WRITE)
folio_mark_dirty()
goto out;
}
}
ret = vmf_insert_pfn()
if (ret == VM_FAULT_NOPAGE)
folio_mark_accessed()
out:
...
This would keep the huge-page code within the first branch. And if it
fails, it still does the regular page fault.
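For reference, that flow can be modeled as a small user-space sketch. The
flag values and the two insert helpers below are stand-ins, not the kernel's
definitions; only the branching and the accessed/dirty bookkeeping match the
outline above.

```c
#include <stdbool.h>

/* Stand-in values; the real kernel definitions differ. */
#define VM_FAULT_NOPAGE  0x0100u
#define FAULT_FLAG_WRITE 0x01u

static bool accessed, dirty;

/* Stub: the PMD insert succeeds only for an aligned huge mapping. */
static unsigned int try_insert_pfn_pmd(bool aligned)
{
	return aligned ? VM_FAULT_NOPAGE : 0;
}

/* Stub: the per-page insert always succeeds. */
static unsigned int insert_pfn(void)
{
	return VM_FAULT_NOPAGE;
}

/*
 * Model of the proposed flow: a successful huge-page insert marks the
 * folio accessed (and dirty on a write fault) and skips the PTE path;
 * otherwise the handler falls through to the regular per-page insert,
 * which only marks the folio accessed.
 */
static unsigned int fault(bool pmd_mappable, bool aligned, unsigned int flags)
{
	unsigned int ret;

	accessed = dirty = false;

	if (pmd_mappable) {
		ret = try_insert_pfn_pmd(aligned);
		if (ret == VM_FAULT_NOPAGE) {
			accessed = true;
			if (flags & FAULT_FLAG_WRITE)
				dirty = true;
			goto out;
		}
	}

	ret = insert_pfn();
	if (ret == VM_FAULT_NOPAGE)
		accessed = true;
out:
	return ret;
}
```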
Best regards
Thomas
>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
GF: Jochen Jaser, Andrew McDonald, Werner Knoblich, (HRB 36809, AG Nürnberg)
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-16 8:45 ` Thomas Zimmermann
@ 2026-03-16 9:36 ` Boris Brezillon
2026-03-16 10:22 ` Thomas Zimmermann
2026-03-16 15:30 ` Boris Brezillon
1 sibling, 1 reply; 35+ messages in thread
From: Boris Brezillon @ 2026-03-16 9:36 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: Biju Das, Tommaso Merciai, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
On Mon, 16 Mar 2026 09:45:49 +0100
Thomas Zimmermann <tzimmermann@suse.de> wrote:
> Hi Boris,
>
> thanks for investigating this problem.
>
> Am 13.03.26 um 18:45 schrieb Boris Brezillon:
> > On Fri, 13 Mar 2026 13:55:21 +0100
> > Boris Brezillon <boris.brezillon@collabora.com> wrote:
> >
> >> On Fri, 13 Mar 2026 13:43:28 +0100
> >> Boris Brezillon <boris.brezillon@collabora.com> wrote:
> >>
> >>> On Fri, 13 Mar 2026 13:18:35 +0100
> >>> Boris Brezillon <boris.brezillon@collabora.com> wrote:
> >>>
> >>>> On Fri, 13 Mar 2026 12:04:25 +0000
> >>>> Biju Das <biju.das.jz@bp.renesas.com> wrote:
> >>>>
> >>>>>> -----Original Message-----
> >>>>>> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Boris Brezillon
> >>>>>> Sent: 13 March 2026 11:57
> >>>>>> Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
> >>>>>>
> >>>>>> On Fri, 13 Mar 2026 11:29:47 +0100
> >>>>>> Thomas Zimmermann <tzimmermann@suse.de> wrote:
> >>>>>>
> >>>>>>> Hi
> >>>>>>>
> >>>>>>> Am 13.03.26 um 11:18 schrieb Boris Brezillon:
> >>>>>>> [...]
> >>>>>>>>>>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> >>>>>>>>>>>> + return VM_FAULT_SIGBUS;
> >>>>>>>>>>>> +
> >>>>>>>>>>>> + file_update_time(vma->vm_file);
> >>>>>>>>>>>> +
> >>>>>>>>>>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> >>>>>>>> Do we need a folio_mark_dirty_lock() here?
> >>>>>>> There is a helper for that with some documentation. [1]
> >>>>>> This [1] seems to solve the problem for me. Still unsure about the folio_mark_dirty_lock vs
> >>>>>> folio_mark_dirty though.
> >>>>>>
> >>>>>> [1]https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodemargomes@gmail.com/
> >>>>> FYI, I used folio_mark_dirty_lock() still it does not solve the issue with weston hang.
> >>>> The patch I pointed to has nothing to do with folio_mark_dirty_lock(),
> >>>> It's a bug caused by huge page mapping changes.
> >>> Scratch that. I had a bunch of other changes on top, and it hangs again
> >>> now that I dropped those.
> >> Seems like it's the combination of huge pages and mkwrite that's
> >> causing issues, if I disable huge pages, it doesn't hang...
> > I managed to have it working with the following diff. I still need to
> > check why the "map-RO-split+RW-on-demand" approach doesn't work (races
> > between huge_fault and pfn_mkwrite?), but I think it's okay to map the
> > real thing writable on the first attempt anyway (we're not trying to do
> > CoW here, since we're always pointing to the same page, it's just the
> > permissions that change). Note that there's still the race fixed by
> > https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodemargomes@gmail.com/
> > in this diff, I just tried to keep the diffstat minimal.
> >
> > --->8---
> > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > index 4500deef4127..4efdce5a60f0 100644
> > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > @@ -561,9 +561,8 @@ static vm_fault_t drm_gem_shmem_try_insert_pfn_pmd(struct vm_fault *vmf, unsigne
> > bool aligned = (vmf->address & ~PMD_MASK) == (paddr & ~PMD_MASK);
> >
> > if (aligned && pmd_none(*vmf->pmd)) {
> > - /* Read-only mapping; split upon write fault */
> > pfn &= PMD_MASK >> PAGE_SHIFT;
> > - return vmf_insert_pfn_pmd(vmf, pfn, false);
> > + return vmf_insert_pfn_pmd(vmf, pfn, vmf->flags & FAULT_FLAG_WRITE);
> > }
> > #endif
> >
> > @@ -597,8 +596,12 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
> >
> > pfn = page_to_pfn(page);
> >
> > - if (folio_test_pmd_mappable(folio))
> > + if (folio_test_pmd_mappable(folio)) {
> > ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
> > + if (ret == VM_FAULT_NOPAGE && vmf->flags & FAULT_FLAG_WRITE)
> > + folio_mark_dirty(folio);
> > + }
> > +
> > if (ret != VM_FAULT_NOPAGE)
> > ret = vmf_insert_pfn(vma, vmf->address, pfn);
>
> All these branches with NOPAGE seem confusing. Can we change the overall
> logic here? Something like:
>
> if (folio_test_pmd_mappable()) {
> ret = try_insert_pfn_pmd()
> if (ret == VM_FAULT_NOPAGE) {
> folio_mark_accessed()
> if (flags & FLAG_WRITE)
> folio_mark_dirty()
> goto out;
> }
> }
>
> ret = vmf_insert_pfn()
> if (ret == NOPAGE)
> folio_mark_accesed()
>
> out:
> ...
>
>
> This would keep the huge-page code within the first branch. And if it
> fails, it still does the regular page fault.
Well, in practice that's not what we want anyway (see the other fix for
the huge_fault vs regular fault race), so I'd still be inclined to handle
folio_mark_accessed() in the common path and keep the pmd vs regular pfn
insertion in if/else branches. Something like this:
if (pmd_insert)
ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
else
ret = vmf_insert_pfn(vma, vmf->address, pfn);
if (ret == VM_FAULT_NOPAGE) {
folio_mark_accessed(folio);
/* Normal pages are mapped RO, and remapped RW afterwards. */
if (pmd_insert && vmf->flags & FAULT_FLAG_WRITE)
folio_mark_dirty(folio);
}
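A user-space model of this variant (stand-in flag values and stubbed insert
helpers, purely illustrative) shows the bookkeeping collapsing into one
common block after the insert method is chosen:

```c
#include <stdbool.h>

/* Stand-in values; the real kernel definitions differ. */
#define VM_FAULT_NOPAGE  0x0100u
#define FAULT_FLAG_WRITE 0x01u

static bool accessed, dirty;

/* Stubs: both insert paths report success in this model. */
static unsigned int try_insert_pfn_pmd(void) { return VM_FAULT_NOPAGE; }
static unsigned int insert_pfn(void)         { return VM_FAULT_NOPAGE; }

/*
 * Model of the if/else variant: the insert method is chosen up front,
 * and the accessed/dirty bookkeeping sits once in the common path.
 */
static unsigned int fault(bool pmd_insert, unsigned int flags)
{
	unsigned int ret;

	accessed = dirty = false;

	if (pmd_insert)
		ret = try_insert_pfn_pmd();
	else
		ret = insert_pfn();

	if (ret == VM_FAULT_NOPAGE) {
		accessed = true;
		/* Normal pages are mapped RO and dirtied later via pfn_mkwrite. */
		if (pmd_insert && (flags & FAULT_FLAG_WRITE))
			dirty = true;
	}

	return ret;
}
```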
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-16 9:36 ` Boris Brezillon
@ 2026-03-16 10:22 ` Thomas Zimmermann
2026-03-16 10:53 ` Boris Brezillon
0 siblings, 1 reply; 35+ messages in thread
From: Thomas Zimmermann @ 2026-03-16 10:22 UTC (permalink / raw)
To: Boris Brezillon
Cc: Biju Das, Tommaso Merciai, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
Hi
Am 16.03.26 um 10:36 schrieb Boris Brezillon:
[...]
>> This would keep the huge-page code within the first branch. And if it
>> fails, it still does the regular page fault.
> Well, in practice that's not what we want anyway (see the other fix for
> the huge_fault vs regular fault race), so I'd still be inclined to have
> the folio_mark_accessed() handled in the common path and have the pmd
> vs regular pfn insertion in some if/else branches. Something like that:
>
> if (pmd_insert)
> ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
> else
> ret = vmf_insert_pfn(vma, vmf->address, pfn);
>
> if (ret == VM_FAULT_NOPAGE) {
> folio_mark_accessed(folio);
>
> /* Normal pages are mapped RO, and remapped RW afterwards. */
> if (pmd_insert && vmf->flags & FAULT_FLAG_WRITE)
This style mixes conditions from different branching depths. The first,
outermost branch uses pmd_insert to compute ret. And then both swap
places, so that ret is tested in the outer branch and pmd_insert in
the inner branch. This is hard to maintain. It is already confusing now
and will be even more so to anyone looking at that code later on.
Best regards
Thomas
> folio_mark_dirty(folio);
> }
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
GF: Jochen Jaser, Andrew McDonald, Werner Knoblich, (HRB 36809, AG Nürnberg)
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-16 10:22 ` Thomas Zimmermann
@ 2026-03-16 10:53 ` Boris Brezillon
0 siblings, 0 replies; 35+ messages in thread
From: Boris Brezillon @ 2026-03-16 10:53 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: Biju Das, Tommaso Merciai, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
On Mon, 16 Mar 2026 11:22:49 +0100
Thomas Zimmermann <tzimmermann@suse.de> wrote:
> Hi
>
> Am 16.03.26 um 10:36 schrieb Boris Brezillon:
> [...]
> >> This would keep the huge-page code within the first branch. And if it
> >> fails, it still does the regular page fault.
> > Well, in practice that's not what we want anyway (see the other fix for
> > the huge_fault vs regular fault race), so I'd still be inclined to have
> > the folio_mark_accessed() handled in the common path and have the pmd
> > vs regular pfn insertion in some if/else branches. Something like that:
> >
> > if (pmd_insert)
> > ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
> > else
> > ret = vmf_insert_pfn(vma, vmf->address, pfn);
> >
> > if (ret == VM_FAULT_NOPAGE) {
> > folio_mark_accessed(folio);
> >
> > /* Normal pages are mapped RO, and remapped RW afterwards. */
> > if (pmd_insert && vmf->flags & FAULT_FLAG_WRITE)
>
> This style mixes conditions from different branching depths. The first,
> outermost branch uses pmd_insert to compute ret. And then both swap
> places, so that ret is tested in the outer branch and pmd_insert in
> the inner branch. This is hard to maintain. It is already confusing now
> and will be even more so to anyone looking at that code later on.
I guess that's another occurrence of us disagreeing on what's
easy/uneasy to maintain :P. I find it way easier to group things by
functionality (here that would be folio state tracking) at the cost of
having conditionals repeated (I trust the compiler to do the proper
optimization) than having multiple paths doing exactly the same thing.
The latter easily leads to one path being updated while the other path
is left behind when new features/fixes are proposed.
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-16 8:45 ` Thomas Zimmermann
2026-03-16 9:36 ` Boris Brezillon
@ 2026-03-16 15:30 ` Boris Brezillon
1 sibling, 0 replies; 35+ messages in thread
From: Boris Brezillon @ 2026-03-16 15:30 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: Biju Das, Tommaso Merciai, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
On Mon, 16 Mar 2026 09:45:49 +0100
Thomas Zimmermann <tzimmermann@suse.de> wrote:
> Hi Boris,
>
> thanks for investigating this problem.
>
> Am 13.03.26 um 18:45 schrieb Boris Brezillon:
> > On Fri, 13 Mar 2026 13:55:21 +0100
> > Boris Brezillon <boris.brezillon@collabora.com> wrote:
> >
> >> On Fri, 13 Mar 2026 13:43:28 +0100
> >> Boris Brezillon <boris.brezillon@collabora.com> wrote:
> >>
> >>> On Fri, 13 Mar 2026 13:18:35 +0100
> >>> Boris Brezillon <boris.brezillon@collabora.com> wrote:
> >>>
> >>>> On Fri, 13 Mar 2026 12:04:25 +0000
> >>>> Biju Das <biju.das.jz@bp.renesas.com> wrote:
> >>>>
> >>>>>> -----Original Message-----
> >>>>>> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Boris Brezillon
> >>>>>> Sent: 13 March 2026 11:57
> >>>>>> Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
> >>>>>>
> >>>>>> On Fri, 13 Mar 2026 11:29:47 +0100
> >>>>>> Thomas Zimmermann <tzimmermann@suse.de> wrote:
> >>>>>>
> >>>>>>> Hi
> >>>>>>>
> >>>>>>> Am 13.03.26 um 11:18 schrieb Boris Brezillon:
> >>>>>>> [...]
> >>>>>>>>>>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> >>>>>>>>>>>> + return VM_FAULT_SIGBUS;
> >>>>>>>>>>>> +
> >>>>>>>>>>>> + file_update_time(vma->vm_file);
> >>>>>>>>>>>> +
> >>>>>>>>>>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> >>>>>>>> Do we need a folio_mark_dirty_lock() here?
> >>>>>>> There is a helper for that with some documentation. [1]
> >>>>>> This [1] seems to solve the problem for me. Still unsure about the folio_mark_dirty_lock vs
> >>>>>> folio_mark_dirty though.
> >>>>>>
> >>>>>> [1]https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodemargomes@gmail.com/
> >>>>> FYI, I used folio_mark_dirty_lock() still it does not solve the issue with weston hang.
> >>>> The patch I pointed to has nothing to do with folio_mark_dirty_lock(),
> >>>> It's a bug caused by huge page mapping changes.
> >>> Scratch that. I had a bunch of other changes on top, and it hangs again
> >>> now that I dropped those.
> >> Seems like it's the combination of huge pages and mkwrite that's
> >> causing issues, if I disable huge pages, it doesn't hang...
> > I managed to have it working with the following diff. I still need to
> > check why the "map-RO-split+RW-on-demand" approach doesn't work (races
> > between huge_fault and pfn_mkwrite?), but I think it's okay to map the
> > real thing writable on the first attempt anyway (we're not trying to do
> > CoW here, since we're always pointing to the same page, it's just the
> > permissions that change). Note that there's still the race fixed by
> > https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodemargomes@gmail.com/
> > in this diff, I just tried to keep the diffstat minimal.
^ "that's not present in this diff"
Sorry for the confusion.
Aside from that, I've been looking more closely at the code in
mm/memory.c, and other implementations of .pfn_mkwrite(), and I'm still
not confident that:
- we can call folio_mark_dirty() without the folio lock held in that
path unless we have the GEM resv lock held (the pte lock is released,
and I'm not sure there's anything else holding on the folio).
- we can claim that the huge vs normal-page paths are race-free. That's
probably okay as long as we only do the dirty bookkeeping in
pfn_mkwrite (we might flag the folio dirty before we know the
writeable mapping has been set up properly, but that's probably
okay). What worries me a bit is the fact that most implementations call
their fault handler from pfn_mkwrite() and do the page table update
from there. There's also this comment [1] that makes me doubt we're
doing the right thing here.
Would be good if someone from MM could chime in and shed some light
on what's supposed to happen in pfn_mkwrite (Matthew, perhaps?).
[1]https://elixir.bootlin.com/linux/v6.17.2/source/fs/xfs/xfs_file.c#L1899
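For reference, the .pfn_mkwrite() path under discussion boils down to something like the following (a sketch assembled from the hunks quoted in this thread, not compilable standalone; whether folio_mark_dirty() here needs the folio lock, or the GEM resv lock, is exactly the open question above):

```c
static vm_fault_t drm_gem_shmem_vm_pfn_mkwrite(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct drm_gem_object *obj = vma->vm_private_data;
	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
	size_t num_pages = obj->size >> PAGE_SHIFT;
	pgoff_t page_offset = vmf->pgoff - drm_vma_node_start(&obj->vma_node);

	if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
		return VM_FAULT_SIGBUS;

	file_update_time(vma->vm_file);

	/* Only dirty bookkeeping; the caller upgrades the PTE to writable. */
	folio_mark_dirty(page_folio(shmem->pages[page_offset]));

	return 0;
}
```

Note that the function name and the page_offset computation are assumptions for illustration; only the drm_WARN_ON(), file_update_time() and folio_mark_dirty() lines appear verbatim in the quoted hunks.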
* RE: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-14 9:42 ` Biju Das
@ 2026-03-19 14:17 ` Biju Das
2026-03-19 14:50 ` Boris Brezillon
0 siblings, 1 reply; 35+ messages in thread
From: Biju Das @ 2026-03-19 14:17 UTC (permalink / raw)
To: Boris Brezillon
Cc: Thomas Zimmermann, Tommaso Merciai, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
Hi All,
> -----Original Message-----
> From: Biju Das
> Sent: 14 March 2026 09:43
> Subject: RE: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
>
> Hi Boris Brezillon,
>
> > -----Original Message-----
> > From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of
> > Boris Brezillon
> > Sent: 13 March 2026 17:46
> > Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty
> > status in mmap
> >
> > On Fri, 13 Mar 2026 13:55:21 +0100
> > Boris Brezillon <boris.brezillon@collabora.com> wrote:
> >
> > > On Fri, 13 Mar 2026 13:43:28 +0100
> > > Boris Brezillon <boris.brezillon@collabora.com> wrote:
> > >
> > > > On Fri, 13 Mar 2026 13:18:35 +0100 Boris Brezillon
> > > > <boris.brezillon@collabora.com> wrote:
> > > >
> > > > > On Fri, 13 Mar 2026 12:04:25 +0000 Biju Das
> > > > > <biju.das.jz@bp.renesas.com> wrote:
> > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On
> > > > > > > Behalf Of Boris Brezillon
> > > > > > > Sent: 13 March 2026 11:57
> > > > > > > Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio
> > > > > > > accessed/dirty status in mmap
> > > > > > >
> > > > > > > On Fri, 13 Mar 2026 11:29:47 +0100 Thomas Zimmermann
> > > > > > > <tzimmermann@suse.de> wrote:
> > > > > > >
> > > > > > > > Hi
> > > > > > > >
> > > > > > > > Am 13.03.26 um 11:18 schrieb Boris Brezillon:
> > > > > > > > [...]
> > > > > > > > >>>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> > > > > > > > >>>>> + return VM_FAULT_SIGBUS;
> > > > > > > > >>>>> +
> > > > > > > > >>>>> + file_update_time(vma->vm_file);
> > > > > > > > >>>>> +
> > > > > > > > > >>>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> > > > > > > > > Do we need a folio_mark_dirty_lock() here?
> > > > > > > >
> > > > > > > > There is a helper for that with some documentation. [1]
> > > > > > >
> > > > > > > This [1] seems to solve the problem for me. Still unsure
> > > > > > > about the folio_mark_dirty_lock vs folio_mark_dirty though.
> > > > > > >
> > > > > > > [1]https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-
> > > > > > > pedrodemargomes@gmail.com/
> > > > > >
> > > > > > FYI, I used folio_mark_dirty_lock() still it does not solve the issue with weston hang.
> > > > >
> > > > > The patch I pointed to has nothing to do with
> > > > > folio_mark_dirty_lock(), It's a bug caused by huge page mapping changes.
> > > >
> > > > Scratch that. I had a bunch of other changes on top, and it hangs
> > > > again now that I dropped those.
> > >
> > > Seems like it's the combination of huge pages and mkwrite that's
> > > causing issues, if I disable huge pages, it doesn't hang...
> >
> > I managed to have it working with the following diff. I still need to
> > check why the "map-RO-split+RW- on-demand" approach doesn't work
> > (races between huge_fault and pfn_mkwrite?), but I think it's okay to
> > map the real thing writable on the first attempt anyway (we're not
> > trying to do CoW here, since we're always pointing to the same page,
> > it's just the permissions that change). Note that there's still the
> > race fixed by
> > https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodemargom
> > es@gmail.com/ in this diff, I just tried to keep the diffstat minimal.
>
> I confirm with this diff, weston is now coming up on RZ/V2L SMARC EVK.
>
> I am just using drm-misc-next + this diff + arm64 defconfig + Poky (Yocto Project Reference Distro)
> 5.0.11 + Mesa
>
> Cheers,
> Biju
>
> >
> > --->8---
> > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > index 4500deef4127..4efdce5a60f0 100644
> > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > @@ -561,9 +561,8 @@ static vm_fault_t drm_gem_shmem_try_insert_pfn_pmd(struct vm_fault *vmf, unsigne
> > bool aligned = (vmf->address & ~PMD_MASK) == (paddr &
> > ~PMD_MASK);
> >
> > if (aligned && pmd_none(*vmf->pmd)) {
> > - /* Read-only mapping; split upon write fault */
> > pfn &= PMD_MASK >> PAGE_SHIFT;
> > - return vmf_insert_pfn_pmd(vmf, pfn, false);
> > + return vmf_insert_pfn_pmd(vmf, pfn, vmf->flags &
> > + FAULT_FLAG_WRITE);
> > }
> > #endif
> >
> > @@ -597,8 +596,12 @@ static vm_fault_t drm_gem_shmem_fault(struct
> > vm_fault *vmf)
> >
> > pfn = page_to_pfn(page);
> >
> > - if (folio_test_pmd_mappable(folio))
> > + if (folio_test_pmd_mappable(folio)) {
> > ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
> > + if (ret == VM_FAULT_NOPAGE && vmf->flags & FAULT_FLAG_WRITE)
> > + folio_mark_dirty(folio);
> > + }
> > +
> > if (ret != VM_FAULT_NOPAGE)
> > ret = vmf_insert_pfn(vma, vmf->address, pfn);
> >
Is there a patch available yet to fix the weston issue? The issue is still present with drm-misc-next.
Please let me know.
Cheers,
Biju
* Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-19 14:17 ` Biju Das
@ 2026-03-19 14:50 ` Boris Brezillon
2026-03-19 14:53 ` Biju Das
0 siblings, 1 reply; 35+ messages in thread
From: Boris Brezillon @ 2026-03-19 14:50 UTC (permalink / raw)
To: Biju Das
Cc: Thomas Zimmermann, Tommaso Merciai, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
On Thu, 19 Mar 2026 14:17:31 +0000
Biju Das <biju.das.jz@bp.renesas.com> wrote:
> Hi All,
>
> > -----Original Message-----
> > From: Biju Das
> > Sent: 14 March 2026 09:43
> > Subject: RE: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
> >
> > Hi Boris Brezillon,
> >
> > > -----Original Message-----
> > > From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of
> > > Boris Brezillon
> > > Sent: 13 March 2026 17:46
> > > Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty
> > > status in mmap
> > >
> > > On Fri, 13 Mar 2026 13:55:21 +0100
> > > Boris Brezillon <boris.brezillon@collabora.com> wrote:
> > >
> > > > On Fri, 13 Mar 2026 13:43:28 +0100
> > > > Boris Brezillon <boris.brezillon@collabora.com> wrote:
> > > >
> > > > > On Fri, 13 Mar 2026 13:18:35 +0100 Boris Brezillon
> > > > > <boris.brezillon@collabora.com> wrote:
> > > > >
> > > > > > On Fri, 13 Mar 2026 12:04:25 +0000 Biju Das
> > > > > > <biju.das.jz@bp.renesas.com> wrote:
> > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On
> > > > > > > > Behalf Of Boris Brezillon
> > > > > > > > Sent: 13 March 2026 11:57
> > > > > > > > Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio
> > > > > > > > accessed/dirty status in mmap
> > > > > > > >
> > > > > > > > On Fri, 13 Mar 2026 11:29:47 +0100 Thomas Zimmermann
> > > > > > > > <tzimmermann@suse.de> wrote:
> > > > > > > >
> > > > > > > > > Hi
> > > > > > > > >
> > > > > > > > > Am 13.03.26 um 11:18 schrieb Boris Brezillon:
> > > > > > > > > [...]
> > > > > > > > > >>>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> > > > > > > > > >>>>> + return VM_FAULT_SIGBUS;
> > > > > > > > > >>>>> +
> > > > > > > > > >>>>> + file_update_time(vma->vm_file);
> > > > > > > > > >>>>> +
> > > > > > > > > >>>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> > > > > > > > > > Do we need a folio_mark_dirty_lock() here?
> > > > > > > > >
> > > > > > > > > There is a helper for that with some documentation. [1]
> > > > > > > >
> > > > > > > > This [1] seems to solve the problem for me. Still unsure
> > > > > > > > about the folio_mark_dirty_lock vs folio_mark_dirty though.
> > > > > > > >
> > > > > > > > [1]https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-
> > > > > > > > pedrodemargomes@gmail.com/
> > > > > > >
> > > > > > > FYI, I used folio_mark_dirty_lock() still it does not solve the issue with weston hang.
> > > > > >
> > > > > > The patch I pointed to has nothing to do with
> > > > > > folio_mark_dirty_lock(), It's a bug caused by huge page mapping changes.
> > > > >
> > > > > Scratch that. I had a bunch of other changes on top, and it hangs
> > > > > again now that I dropped those.
> > > >
> > > > Seems like it's the combination of huge pages and mkwrite that's
> > > > causing issues, if I disable huge pages, it doesn't hang...
> > >
> > > I managed to have it working with the following diff. I still need to
> > > check why the "map-RO-split+RW- on-demand" approach doesn't work
> > > (races between huge_fault and pfn_mkwrite?), but I think it's okay to
> > > map the real thing writable on the first attempt anyway (we're not
> > > trying to do CoW here, since we're always pointing to the same page,
> > > it's just the permissions that change). Note that there's still the
> > > race fixed by
> > > https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodemargom
> > > es@gmail.com/ in this diff, I just tried to keep the diffstat minimal.
> >
> > I confirm with this diff, weston is now coming up on RZ/V2L SMARC EVK.
> >
> > I am just using drm-misc-next + this diff + arm64 defconfig + Poky (Yocto Project Reference Distro)
> > 5.0.11 + Mesa
> >
> > Cheers,
> > Biju
> >
> > >
> > > --->8---
> > > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > index 4500deef4127..4efdce5a60f0 100644
> > > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > @@ -561,9 +561,8 @@ static vm_fault_t drm_gem_shmem_try_insert_pfn_pmd(struct vm_fault *vmf, unsigne
> > > bool aligned = (vmf->address & ~PMD_MASK) == (paddr &
> > > ~PMD_MASK);
> > >
> > > if (aligned && pmd_none(*vmf->pmd)) {
> > > - /* Read-only mapping; split upon write fault */
> > > pfn &= PMD_MASK >> PAGE_SHIFT;
> > > - return vmf_insert_pfn_pmd(vmf, pfn, false);
> > > + return vmf_insert_pfn_pmd(vmf, pfn, vmf->flags &
> > > + FAULT_FLAG_WRITE);
> > > }
> > > #endif
> > >
> > > @@ -597,8 +596,12 @@ static vm_fault_t drm_gem_shmem_fault(struct
> > > vm_fault *vmf)
> > >
> > > pfn = page_to_pfn(page);
> > >
> > > - if (folio_test_pmd_mappable(folio))
> > > + if (folio_test_pmd_mappable(folio)) {
> > > ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
> > > + if (ret == VM_FAULT_NOPAGE && vmf->flags & FAULT_FLAG_WRITE)
> > > + folio_mark_dirty(folio);
> > > + }
> > > +
> > > if (ret != VM_FAULT_NOPAGE)
> > > ret = vmf_insert_pfn(vma, vmf->address, pfn);
> > >
>
> Any patch available to fix the weston issue? Still the issue is present with drm-misc-next
> Please let me know.
Not yet. I want to land [1] first. I'll try to post an official version
of the proposed fix tomorrow, though I'd really like to have some
inputs from the MM maintainers before claiming this is the right fix...
[1]https://yhbt.net/lore/dri-devel/20260319015224.46896-1-pedrodemargomes@gmail.com/
* RE: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
2026-03-19 14:50 ` Boris Brezillon
@ 2026-03-19 14:53 ` Biju Das
0 siblings, 0 replies; 35+ messages in thread
From: Biju Das @ 2026-03-19 14:53 UTC (permalink / raw)
To: Boris Brezillon
Cc: Thomas Zimmermann, Tommaso Merciai, loic.molinari@collabora.com,
willy@infradead.org, frank.binns@imgtec.com,
matt.coster@imgtec.com, maarten.lankhorst@linux.intel.com,
mripard@kernel.org, airlied@gmail.com, simona@ffwll.ch,
linux-mm@kvack.org, dri-devel@lists.freedesktop.org
Hi Boris,
> -----Original Message-----
> From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On Behalf Of Boris Brezillon
> Sent: 19 March 2026 14:50
> Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap
>
> On Thu, 19 Mar 2026 14:17:31 +0000
> Biju Das <biju.das.jz@bp.renesas.com> wrote:
>
> > Hi All,
> >
> > > -----Original Message-----
> > > From: Biju Das
> > > Sent: 14 March 2026 09:43
> > > Subject: RE: [PATCH v4 5/6] drm/gem-shmem: Track folio
> > > accessed/dirty status in mmap
> > >
> > > Hi Boris Brezillon,
> > >
> > > > -----Original Message-----
> > > > From: dri-devel <dri-devel-bounces@lists.freedesktop.org> On
> > > > Behalf Of Boris Brezillon
> > > > Sent: 13 March 2026 17:46
> > > > Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio
> > > > accessed/dirty status in mmap
> > > >
> > > > On Fri, 13 Mar 2026 13:55:21 +0100 Boris Brezillon
> > > > <boris.brezillon@collabora.com> wrote:
> > > >
> > > > > On Fri, 13 Mar 2026 13:43:28 +0100 Boris Brezillon
> > > > > <boris.brezillon@collabora.com> wrote:
> > > > >
> > > > > > On Fri, 13 Mar 2026 13:18:35 +0100 Boris Brezillon
> > > > > > <boris.brezillon@collabora.com> wrote:
> > > > > >
> > > > > > > On Fri, 13 Mar 2026 12:04:25 +0000 Biju Das
> > > > > > > <biju.das.jz@bp.renesas.com> wrote:
> > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: dri-devel
> > > > > > > > > <dri-devel-bounces@lists.freedesktop.org> On Behalf Of
> > > > > > > > > Boris Brezillon
> > > > > > > > > Sent: 13 March 2026 11:57
> > > > > > > > > Subject: Re: [PATCH v4 5/6] drm/gem-shmem: Track folio
> > > > > > > > > accessed/dirty status in mmap
> > > > > > > > >
> > > > > > > > > On Fri, 13 Mar 2026 11:29:47 +0100 Thomas Zimmermann
> > > > > > > > > <tzimmermann@suse.de> wrote:
> > > > > > > > >
> > > > > > > > > > Hi
> > > > > > > > > >
> > > > > > > > > > Am 13.03.26 um 11:18 schrieb Boris Brezillon:
> > > > > > > > > > [...]
> > > > > > > > > > >>>>> + if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> > > > > > > > > > >>>>> + return VM_FAULT_SIGBUS;
> > > > > > > > > > >>>>> +
> > > > > > > > > > >>>>> + file_update_time(vma->vm_file);
> > > > > > > > > > >>>>> +
> > > > > > > > > > >>>>> + folio_mark_dirty(page_folio(shmem->pages[page_offset]));
> > > > > > > > > > > Do we need a folio_mark_dirty_lock() here?
> > > > > > > > > >
> > > > > > > > > > There is a helper for that with some documentation.
> > > > > > > > > > [1]
> > > > > > > > >
> > > > > > > > > This [1] seems to solve the problem for me. Still unsure
> > > > > > > > > about the folio_mark_dirty_lock vs folio_mark_dirty though.
> > > > > > > > >
> > > > > > > > > [1]https://yhbt.net/lore/dri-devel/20260312155027.168260
> > > > > > > > > 6-1- pedrodemargomes@gmail.com/
> > > > > > > >
> > > > > > > > FYI, I used folio_mark_dirty_lock() still it does not solve the issue with weston hang.
> > > > > > >
> > > > > > > The patch I pointed to has nothing to do with
> > > > > > > folio_mark_dirty_lock(), It's a bug caused by huge page mapping changes.
> > > > > >
> > > > > > Scratch that. I had a bunch of other changes on top, and it
> > > > > > hangs again now that I dropped those.
> > > > >
> > > > > Seems like it's the combination of huge pages and mkwrite that's
> > > > > causing issues, if I disable huge pages, it doesn't hang...
> > > >
> > > > I managed to have it working with the following diff. I still need
> > > > to check why the "map-RO-split+RW- on-demand" approach doesn't
> > > > work (races between huge_fault and pfn_mkwrite?), but I think it's
> > > > okay to map the real thing writable on the first attempt anyway
> > > > (we're not trying to do CoW here, since we're always pointing to
> > > > the same page, it's just the permissions that change). Note that
> > > > there's still the race fixed by
> > > > https://yhbt.net/lore/dri-devel/20260312155027.1682606-1-pedrodema
> > > > rgom es@gmail.com/ in this diff, I just tried to keep the diffstat
> > > > minimal.
> > >
> > > I confirm with this diff, weston is now coming up on RZ/V2L SMARC EVK.
> > >
> > > I am just using drm-misc-next + this diff + arm64 defconfig + Poky
> > > (Yocto Project Reference Distro)
> > > 5.0.11 + Mesa
> > >
> > > Cheers,
> > > Biju
> > >
> > > >
> > > > --->8---
> > > > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > > b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > > index 4500deef4127..4efdce5a60f0 100644
> > > > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > > > @@ -561,9 +561,8 @@ static vm_fault_t drm_gem_shmem_try_insert_pfn_pmd(struct vm_fault *vmf,
> unsigne
> > > > bool aligned = (vmf->address & ~PMD_MASK) == (paddr &
> > > > ~PMD_MASK);
> > > >
> > > > if (aligned && pmd_none(*vmf->pmd)) {
> > > > - /* Read-only mapping; split upon write fault */
> > > > pfn &= PMD_MASK >> PAGE_SHIFT;
> > > > - return vmf_insert_pfn_pmd(vmf, pfn, false);
> > > > + return vmf_insert_pfn_pmd(vmf, pfn, vmf->flags &
> > > > + FAULT_FLAG_WRITE);
> > > > }
> > > > #endif
> > > >
> > > > @@ -597,8 +596,12 @@ static vm_fault_t drm_gem_shmem_fault(struct
> > > > vm_fault *vmf)
> > > >
> > > > pfn = page_to_pfn(page);
> > > >
> > > > - if (folio_test_pmd_mappable(folio))
> > > > + if (folio_test_pmd_mappable(folio)) {
> > > > ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
> > > > + if (ret == VM_FAULT_NOPAGE && vmf->flags & FAULT_FLAG_WRITE)
> > > > + folio_mark_dirty(folio);
> > > > + }
> > > > +
> > > > if (ret != VM_FAULT_NOPAGE)
> > > > ret = vmf_insert_pfn(vma, vmf->address, pfn);
> > > >
> >
> > Is there a patch available yet to fix the weston issue? The issue is
> > still present with drm-misc-next. Please let me know.
>
> Not yet. I want to land [1] first. I'll try to post an official version of the proposed fix tomorrow,
> though I'd really like to have some inputs from the MM maintainers before claiming this is the right
> fix...
>
> [1]https://yhbt.net/lore/dri-devel/20260319015224.46896-1-pedrodemargomes@gmail.com/
Thanks for the update.
Cheers,
Biju
end of thread, other threads:[~2026-03-19 14:53 UTC | newest]
Thread overview: 35+ messages
-- links below jump to the message on this page --
2026-02-27 11:42 [PATCH v4 0/6] drm/gem-shmem: Track page accessed/dirty status Thomas Zimmermann
2026-02-27 11:42 ` [PATCH v4 1/6] drm/gem-shmem: Use obj directly where appropriate in fault handler Thomas Zimmermann
2026-02-27 11:42 ` [PATCH v4 2/6] drm/gem-shmem: Test for existence of page in mmap " Thomas Zimmermann
2026-02-27 11:42 ` [PATCH v4 3/6] drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd() Thomas Zimmermann
2026-02-27 11:42 ` [PATCH v4 4/6] drm/gem-shmem: Refactor drm_gem_shmem_try_map_pmd() Thomas Zimmermann
2026-02-27 11:42 ` [PATCH v4 5/6] drm/gem-shmem: Track folio accessed/dirty status in mmap Thomas Zimmermann
2026-03-12 17:36 ` Tommaso Merciai
2026-03-12 17:46 ` Biju Das
2026-03-13 6:44 ` Biju Das
2026-03-13 8:00 ` Thomas Zimmermann
2026-03-13 8:41 ` Biju Das
2026-03-13 10:03 ` Biju Das
2026-03-13 10:18 ` Boris Brezillon
2026-03-13 10:29 ` Thomas Zimmermann
2026-03-13 10:33 ` Biju Das
2026-03-13 10:52 ` Boris Brezillon
2026-03-13 11:56 ` Boris Brezillon
2026-03-13 12:04 ` Biju Das
2026-03-13 12:18 ` Boris Brezillon
2026-03-13 12:43 ` Boris Brezillon
2026-03-13 12:55 ` Boris Brezillon
2026-03-13 17:45 ` Boris Brezillon
2026-03-14 9:42 ` Biju Das
2026-03-19 14:17 ` Biju Das
2026-03-19 14:50 ` Boris Brezillon
2026-03-19 14:53 ` Biju Das
2026-03-16 8:45 ` Thomas Zimmermann
2026-03-16 9:36 ` Boris Brezillon
2026-03-16 10:22 ` Thomas Zimmermann
2026-03-16 10:53 ` Boris Brezillon
2026-03-16 15:30 ` Boris Brezillon
2026-03-13 8:33 ` Thomas Zimmermann
2026-03-13 8:47 ` Biju Das
2026-03-13 9:24 ` Tommaso Merciai
2026-02-27 11:42 ` [PATCH v4 6/6] drm/gem-shmem: Track folio accessed/dirty status in vmap Thomas Zimmermann