public inbox for linux-mm@kvack.org
* [PATCH] drm/shmem_helper: Make sure PMD entries get the writeable upgrade
@ 2026-03-20 15:19 Boris Brezillon
  2026-03-20 16:38 ` Tommaso Merciai
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Boris Brezillon @ 2026-03-20 15:19 UTC (permalink / raw)
  To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, dri-devel
  Cc: David Airlie, Simona Vetter, linux-kernel, Andrew Morton,
	David Hildenbrand, Lorenzo Stoakes, Zi Yan, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lance Yang, linux-mm, Boris Brezillon, kernel, Biju Das,
	Tommaso Merciai

Unlike PTEs which are automatically upgraded to writeable entries if
.pfn_mkwrite() returns 0, the PMD upgrades go through .huge_fault(),
and we currently pretend to have handled the make-writeable request
even though we only ever map things read-only. Make sure we pass the
proper "write" info to vmf_insert_pfn_pmd() in that case.

This also means we have to record the mkwrite event in the .huge_fault()
path now. Move the dirty tracking logic to a
drm_gem_shmem_record_mkwrite() helper so it can also be called from
drm_gem_shmem_pfn_mkwrite().

Note that this wasn't a problem before commit 28e3918179aa
("drm/gem-shmem: Track folio accessed/dirty status in mmap"), because
the pgprot was not lowered to read-only before that commit (see the
vma_wants_writenotify() call in vma_set_page_prot()).

Fixes: 28e3918179aa ("drm/gem-shmem: Track folio accessed/dirty status in mmap")
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Biju Das <biju.das.jz@bp.renesas.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>
---

This patch is based on drm-tip [2], because that's the only branch
that has both [1] and the dirty tracking changes that live in
drm-misc-next.

Also added the THP maintainers in Cc, so I can hopefully get some
feedback on the fix. For instance, I'm still unsure
drm_gem_shmem_pfn_mkwrite() is race-free (do we need some locking
there? should we call folio_mark_dirty_lock()? should we call the
fault handler directly from there and have all the dirty tracking
in this .[huge_]fault path?).

[1] https://yhbt.net/lore/dri-devel/20260319015224.46896-1-pedrodemargomes@gmail.com/
[2] https://gitlab.freedesktop.org/drm/tip
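[Editor's note: the asymmetry the commit message describes — the core MM
upgrades a PTE itself when .pfn_mkwrite() returns 0, whereas a write fault
on a read-only PMD just re-enters .huge_fault() — can be sketched as a toy
userspace model. This is plain Python, not kernel code; all class and
method names below are simplified stand-ins for the real handlers.]

```python
# Toy model of the two write-fault paths described in the commit message.
# Not kernel code: CoreMM/BuggyDriver/FixedDriver are illustrative only.

class CoreMM:
    """Models how the core MM resolves a write fault on a read-only entry."""

    def write_fault_pte(self, driver):
        # PTE path: if .pfn_mkwrite() returns 0, the core MM itself
        # upgrades the existing PTE to writeable.
        if driver.pfn_mkwrite() == 0:
            return "rw"
        return "ro"

    def write_fault_pmd(self, driver):
        # PMD path: there is no automatic upgrade; the write fault is
        # routed back through .huge_fault(), and the resulting protection
        # is whatever the driver inserted.
        return driver.huge_fault(write=True)


class BuggyDriver:
    def pfn_mkwrite(self):
        return 0

    def huge_fault(self, write):
        # Bug being fixed: always insert read-only, ignoring the write
        # request, while still reporting the fault as handled.
        return "ro"


class FixedDriver(BuggyDriver):
    def huge_fault(self, write):
        # The fix: propagate the "write" info to the PMD insertion.
        return "rw" if write else "ro"


mm = CoreMM()
assert mm.write_fault_pte(BuggyDriver()) == "rw"   # PTEs were always fine
assert mm.write_fault_pmd(BuggyDriver()) == "ro"   # PMD stays read-only
assert mm.write_fault_pmd(FixedDriver()) == "rw"   # what this patch does
```

With the buggy behavior, a write to a huge-mapped page keeps hitting
.huge_fault(), which keeps re-inserting a read-only PMD — the
make-writeable request is never honored.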
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 46 ++++++++++++++++++--------
 1 file changed, 32 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 2062ca607833..545933c7f712 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -554,6 +554,21 @@ int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
 
+static void drm_gem_shmem_record_mkwrite(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct drm_gem_object *obj = vma->vm_private_data;
+	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+	loff_t num_pages = obj->size >> PAGE_SHIFT;
+	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
+
+	if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
+		return;
+
+	file_update_time(vma->vm_file);
+	folio_mark_dirty(page_folio(shmem->pages[page_offset]));
+}
+
 static vm_fault_t try_insert_pfn(struct vm_fault *vmf, unsigned int order,
 				 unsigned long pfn)
 {
@@ -566,8 +581,23 @@ static vm_fault_t try_insert_pfn(struct vm_fault *vmf, unsigned int order,
 
 		if (aligned &&
 		    folio_test_pmd_mappable(page_folio(pfn_to_page(pfn)))) {
+			vm_fault_t ret;
+
 			pfn &= PMD_MASK >> PAGE_SHIFT;
-			return vmf_insert_pfn_pmd(vmf, pfn, false);
+
+			/* Unlike PTEs which are automatically upgraded to
+			 * writeable entries, the PMD upgrades go through
+			 * .huge_fault(). Make sure we pass the "write" info
+			 * along in that case.
+			 * This also means we have to record the write fault
+			 * here, instead of in .pfn_mkwrite().
+			 */
+			ret = vmf_insert_pfn_pmd(vmf, pfn,
+						 vmf->flags & FAULT_FLAG_WRITE);
+			if (ret == VM_FAULT_NOPAGE && (vmf->flags & FAULT_FLAG_WRITE))
+				drm_gem_shmem_record_mkwrite(vmf);
+
+			return ret;
 		}
 #endif
 	}
@@ -655,19 +685,7 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
 
 static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
-	struct drm_gem_object *obj = vma->vm_private_data;
-	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	loff_t num_pages = obj->size >> PAGE_SHIFT;
-	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
-
-	if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
-		return VM_FAULT_SIGBUS;
-
-	file_update_time(vma->vm_file);
-
-	folio_mark_dirty(page_folio(shmem->pages[page_offset]));
-
+	drm_gem_shmem_record_mkwrite(vmf);
 	return 0;
 }
 
-- 
2.53.0



^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [PATCH] drm/shmem_helper: Make sure PMD entries get the writeable upgrade
  2026-03-20 15:19 [PATCH] drm/shmem_helper: Make sure PMD entries get the writeable upgrade Boris Brezillon
@ 2026-03-20 16:38 ` Tommaso Merciai
  2026-03-30  8:06 ` Thomas Zimmermann
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Tommaso Merciai @ 2026-03-20 16:38 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, dri-devel,
	David Airlie, Simona Vetter, linux-kernel, Andrew Morton,
	David Hildenbrand, Lorenzo Stoakes, Zi Yan, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lance Yang, linux-mm, kernel, Biju Das

Hi Boris,
Thanks for your patch.

On Fri, Mar 20, 2026 at 04:19:13PM +0100, Boris Brezillon wrote:
> Unlike PTEs which are automatically upgraded to writeable entries if
> .pfn_mkwrite() returns 0, the PMD upgrades go through .huge_fault(),
> and we currently pretend to have handled the make-writeable request
> even though we only ever map things read-only. Make sure we pass the
> proper "write" info to vmf_insert_pfn_pmd() in that case.
> 
> This also means we have to record the mkwrite event in the .huge_fault()
> path now. Move the dirty tracking logic to a
> drm_gem_shmem_record_mkwrite() helper so it can also be called from
> drm_gem_shmem_pfn_mkwrite().
> 
> Note that this wasn't a problem before commit 28e3918179aa
> ("drm/gem-shmem: Track folio accessed/dirty status in mmap"), because
> the pgprot were not lowered to read-only before this commit (see the
> vma_wants_writenotify() in vma_set_page_prot()).
> 
> Fixes: 28e3918179aa ("drm/gem-shmem: Track folio accessed/dirty status in mmap")
> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
> Cc: Biju Das <biju.das.jz@bp.renesas.com>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>
> ---
> 
> This patch is based on drm-tip [2], because that's the only branch
> that has both [1] and the dirty tracking changes that live in
> drm-misc-next.

Tested on RZ/G3E, this fixes the issue on my side.
Thanks for your work.

Tested-by: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>

Kind Regards,
Tommaso

> [...]



* Re: [PATCH] drm/shmem_helper: Make sure PMD entries get the writeable upgrade
  2026-03-20 15:19 [PATCH] drm/shmem_helper: Make sure PMD entries get the writeable upgrade Boris Brezillon
  2026-03-20 16:38 ` Tommaso Merciai
@ 2026-03-30  8:06 ` Thomas Zimmermann
  2026-03-31 15:33 ` Biju Das
  2026-04-03  7:57 ` Loïc Molinari
  3 siblings, 0 replies; 6+ messages in thread
From: Thomas Zimmermann @ 2026-03-30  8:06 UTC (permalink / raw)
  To: Boris Brezillon, Maarten Lankhorst, Maxime Ripard, dri-devel
  Cc: David Airlie, Simona Vetter, linux-kernel, Andrew Morton,
	David Hildenbrand, Lorenzo Stoakes, Zi Yan, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lance Yang, linux-mm, kernel, Biju Das, Tommaso Merciai



Am 20.03.26 um 16:19 schrieb Boris Brezillon:
> Unlike PTEs which are automatically upgraded to writeable entries if
> .pfn_mkwrite() returns 0, the PMD upgrades go through .huge_fault(),
> and we currently pretend to have handled the make-writeable request
> even though we only ever map things read-only. Make sure we pass the
> proper "write" info to vmf_insert_pfn_pmd() in that case.
>
> This also means we have to record the mkwrite event in the .huge_fault()
> path now. Move the dirty tracking logic to a
> drm_gem_shmem_record_mkwrite() helper so it can also be called from
> drm_gem_shmem_pfn_mkwrite().
>
> Note that this wasn't a problem before commit 28e3918179aa
> ("drm/gem-shmem: Track folio accessed/dirty status in mmap"), because
> the pgprot were not lowered to read-only before this commit (see the
> vma_wants_writenotify() in vma_set_page_prot()).
>
> Fixes: 28e3918179aa ("drm/gem-shmem: Track folio accessed/dirty status in mmap")
> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
> Cc: Biju Das <biju.das.jz@bp.renesas.com>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>

Acked-by: Thomas Zimmermann <tzimmermann@suse.de>

> [...]

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
GF: Jochen Jaser, Andrew McDonald, Werner Knoblich, (HRB 36809, AG Nürnberg)





* RE: [PATCH] drm/shmem_helper: Make sure PMD entries get the writeable upgrade
  2026-03-20 15:19 [PATCH] drm/shmem_helper: Make sure PMD entries get the writeable upgrade Boris Brezillon
  2026-03-20 16:38 ` Tommaso Merciai
  2026-03-30  8:06 ` Thomas Zimmermann
@ 2026-03-31 15:33 ` Biju Das
  2026-04-03  7:57 ` Loïc Molinari
  3 siblings, 0 replies; 6+ messages in thread
From: Biju Das @ 2026-03-31 15:33 UTC (permalink / raw)
  To: Boris Brezillon, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, dri-devel@lists.freedesktop.org
  Cc: David Airlie, Simona Vetter, linux-kernel@vger.kernel.org,
	Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Zi Yan,
	Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Lance Yang, linux-mm@kvack.org, kernel@collabora.com,
	Tommaso Merciai

Hi Boris Brezillon,

> -----Original Message-----
> From: Boris Brezillon <boris.brezillon@collabora.com>
> Sent: 20 March 2026 15:19
> Subject: [PATCH] drm/shmem_helper: Make sure PMD entries get the writeable upgrade
> 
> Unlike PTEs which are automatically upgraded to writeable entries if
> .pfn_mkwrite() returns 0, the PMD upgrades go through .huge_fault(), and we currently pretend to have
> handled the make-writeable request even though we only ever map things read-only. Make sure we pass
> the proper "write" info to vmf_insert_pfn_pmd() in that case.
> 
> This also means we have to record the mkwrite event in the .huge_fault() path now. Move the dirty
> tracking logic to a
> drm_gem_shmem_record_mkwrite() helper so it can also be called from drm_gem_shmem_pfn_mkwrite().
> 
> Note that this wasn't a problem before commit 28e3918179aa
> ("drm/gem-shmem: Track folio accessed/dirty status in mmap"), because the pgprot were not lowered to
> read-only before this commit (see the
> vma_wants_writenotify() in vma_set_page_prot()).
> 
> Fixes: 28e3918179aa ("drm/gem-shmem: Track folio accessed/dirty status in mmap")
> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
> Cc: Biju Das <biju.das.jz@bp.renesas.com>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>


Thanks, Weston is now back up on the RZ/G3L SMARC EVK with Mesa 24.0.7 Panfrost.

Tested-by: Biju Das <biju.das.jz@bp.renesas.com>

Cheers,
Biju


> [...]




* Re: [PATCH] drm/shmem_helper: Make sure PMD entries get the writeable upgrade
  2026-03-20 15:19 [PATCH] drm/shmem_helper: Make sure PMD entries get the writeable upgrade Boris Brezillon
                   ` (2 preceding siblings ...)
  2026-03-31 15:33 ` Biju Das
@ 2026-04-03  7:57 ` Loïc Molinari
  2026-04-03  8:25   ` Boris Brezillon
  3 siblings, 1 reply; 6+ messages in thread
From: Loïc Molinari @ 2026-04-03  7:57 UTC (permalink / raw)
  To: Boris Brezillon, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, dri-devel
  Cc: David Airlie, Simona Vetter, linux-kernel, Andrew Morton,
	David Hildenbrand, Lorenzo Stoakes, Zi Yan, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lance Yang, linux-mm, kernel, Biju Das, Tommaso Merciai

Hi Boris,

On 20/03/2026 16:19, Boris Brezillon wrote:
> Unlike PTEs which are automatically upgraded to writeable entries if
> .pfn_mkwrite() returns 0, the PMD upgrades go through .huge_fault(),
> and we currently pretend to have handled the make-writeable request
> even though we only ever map things read-only. Make sure we pass the
> proper "write" info to vmf_insert_pfn_pmd() in that case.
> 
> This also means we have to record the mkwrite event in the .huge_fault()
> path now. Move the dirty tracking logic to a
> drm_gem_shmem_record_mkwrite() helper so it can also be called from
> drm_gem_shmem_pfn_mkwrite().
> 
> Note that this wasn't a problem before commit 28e3918179aa
> ("drm/gem-shmem: Track folio accessed/dirty status in mmap"), because
> the pgprot were not lowered to read-only before this commit (see the
> vma_wants_writenotify() in vma_set_page_prot()).
> 
> Fixes: 28e3918179aa ("drm/gem-shmem: Track folio accessed/dirty status in mmap")
> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
> Cc: Biju Das <biju.das.jz@bp.renesas.com>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>
> ---
> 
> This patch is based on drm-tip [2], because that's the only branch
> that has both [1] and the dirty tracking changes that live in
> drm-misc-next.
> 
> Also added the THP maintainers in Cc, so I can hopefully get some
> feedback on the fix. For instance, I'm still unsure
> drm_gem_shmem_pfn_mkwrite() is race-free (do we need some locking
> there? should we call folio_mark_dirty_lock()? should we call the
> fault handler directly from there and have all the dirty tracking
> in this .[huge_]fault path?).
> 
> [1]https://yhbt.net/lore/dri-devel/20260319015224.46896-1-pedrodemargomes@gmail.com/
> [2]https://gitlab.freedesktop.org/drm/tip
> ---
>   drivers/gpu/drm/drm_gem_shmem_helper.c | 46 ++++++++++++++++++--------
>   1 file changed, 32 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 2062ca607833..545933c7f712 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -554,6 +554,21 @@ int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
>   }
>   EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
>   
> +static void drm_gem_shmem_record_mkwrite(struct vm_fault *vmf)
> +{
> +	struct vm_area_struct *vma = vmf->vma;
> +	struct drm_gem_object *obj = vma->vm_private_data;
> +	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> +	loff_t num_pages = obj->size >> PAGE_SHIFT;
> +	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
> +
> +	if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> +		return;
> +
> +	file_update_time(vma->vm_file);
> +	folio_mark_dirty(page_folio(shmem->pages[page_offset]));

Unless we're sure the folio can't be truncated by another CPU, maybe we 
should use folio_mark_dirty_lock() here. This is what's done for pages 
(not PFNs) in mm/memory.c. Let's wait and see how it goes without 
locking for now.

Reviewed-by: Loïc Molinari <loic.molinari@collabora.com>

Regards,
Loïc

> [...]




* Re: [PATCH] drm/shmem_helper: Make sure PMD entries get the writeable upgrade
  2026-04-03  7:57 ` Loïc Molinari
@ 2026-04-03  8:25   ` Boris Brezillon
  0 siblings, 0 replies; 6+ messages in thread
From: Boris Brezillon @ 2026-04-03  8:25 UTC (permalink / raw)
  To: Loïc Molinari
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, dri-devel,
	David Airlie, Simona Vetter, linux-kernel, Andrew Morton,
	David Hildenbrand, Lorenzo Stoakes, Zi Yan, Baolin Wang,
	Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
	Lance Yang, linux-mm, kernel, Biju Das, Tommaso Merciai

On Fri, 3 Apr 2026 09:57:53 +0200
Loïc Molinari <loic.molinari@collabora.com> wrote:

> Hi Boris,
> 
> On 20/03/2026 16:19, Boris Brezillon wrote:
> > Unlike PTEs which are automatically upgraded to writeable entries if
> > .pfn_mkwrite() returns 0, the PMD upgrades go through .huge_fault(),
> > and we currently pretend to have handled the make-writeable request
> > even though we only ever map things read-only. Make sure we pass the
> > proper "write" info to vmf_insert_pfn_pmd() in that case.
> > 
> > This also means we have to record the mkwrite event in the .huge_fault()
> > path now. Move the dirty tracking logic to a
> > drm_gem_shmem_record_mkwrite() helper so it can also be called from
> > drm_gem_shmem_pfn_mkwrite().
> > 
> > Note that this wasn't a problem before commit 28e3918179aa
> > ("drm/gem-shmem: Track folio accessed/dirty status in mmap"), because
> > the pgprot were not lowered to read-only before this commit (see the
> > vma_wants_writenotify() in vma_set_page_prot()).
> > 
> > Fixes: 28e3918179aa ("drm/gem-shmem: Track folio accessed/dirty status in mmap")
> > Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
> > Cc: Biju Das <biju.das.jz@bp.renesas.com>
> > Cc: Thomas Zimmermann <tzimmermann@suse.de>
> > Cc: Tommaso Merciai <tommaso.merciai.xr@bp.renesas.com>
> > ---
> > 
> > This patch is based on drm-tip [2], because that's the only branch
> > that has both [1] and the dirty tracking changes that live in
> > drm-misc-next.
> > 
> > Also added the THP maintainers in Cc, so I can hopefully get some
> > feedback on the fix. For instance, I'm still unsure
> > drm_gem_shmem_pfn_mkwrite() is race-free (do we need some locking
> > there? should we call folio_mark_dirty_lock()? should we call the
> > fault handler directly from there and have all the dirty tracking
> > in this .[huge_]fault path?).
> > 
> > [1]https://yhbt.net/lore/dri-devel/20260319015224.46896-1-pedrodemargomes@gmail.com/
> > [2]https://gitlab.freedesktop.org/drm/tip
> > ---
> >   drivers/gpu/drm/drm_gem_shmem_helper.c | 46 ++++++++++++++++++--------
> >   1 file changed, 32 insertions(+), 14 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > index 2062ca607833..545933c7f712 100644
> > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > @@ -554,6 +554,21 @@ int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev,
> >   }
> >   EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
> >   
> > +static void drm_gem_shmem_record_mkwrite(struct vm_fault *vmf)
> > +{
> > +	struct vm_area_struct *vma = vmf->vma;
> > +	struct drm_gem_object *obj = vma->vm_private_data;
> > +	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> > +	loff_t num_pages = obj->size >> PAGE_SHIFT;
> > +	pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
> > +
> > +	if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
> > +		return;

For full transparency, I'd like to mention the review bot complained [1]
about us not propagating the error to .pfn_mkwrite() as was done before
this patch. In practice, I don't think it matters much: if the pages are
gone and .pfn_mkwrite() is called, we're in trouble anyway, because a
read-only PTE pointing to this missing page exists already, and it
won't be removed if we return an error, it just won't be updated to
read-write.

> > +
> > +	file_update_time(vma->vm_file);
> > +	folio_mark_dirty(page_folio(shmem->pages[page_offset]));  
> 
> Unless we're sure the folio can't be truncated by another CPU, maybe we 
> should use folio_mark_dirty_lock() here.

In practice, we control when the file is truncated
(drm_gem_shmem_purge_locked()), and before we do that, we make sure to
kill all the CPU mappings (drm_vma_node_unmap() called before
shmem_truncate_range()). So I'd say we're good WRT this particular race.
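[Editor's note: the ordering argument above — unmap the CPU mappings first,
then truncate — can be sketched as a toy userspace model. Plain Python,
not kernel code; the names mirror the helpers Boris mentions but are
illustrative stand-ins only.]

```python
# Toy model of the purge ordering: drm_gem_shmem_purge_locked() kills
# the CPU mappings (drm_vma_node_unmap()) before freeing the backing
# pages (shmem_truncate_range()), so .pfn_mkwrite() — only reachable
# while a read-only CPU mapping exists — cannot race with truncation.

class ShmemObject:
    def __init__(self, npages):
        self.pages = [object()] * npages   # backing pages
        self.mapped = True                 # CPU mappings exist

    def purge(self):
        # Order matters: unmap first, then truncate.
        self.mapped = False                # drm_vma_node_unmap()
        self.pages = None                  # shmem_truncate_range()

    def pfn_mkwrite(self, page_offset):
        # Only reachable while a (read-only) CPU mapping still exists.
        assert self.mapped, "mkwrite cannot run on an unmapped object"
        return self.pages[page_offset]


obj = ShmemObject(4)
assert obj.pfn_mkwrite(0) is not None    # fine while mapped
obj.purge()
# After purge, a CPU access refaults through .fault()/.huge_fault()
# (which finds no pages) instead of ever reaching .pfn_mkwrite().
assert obj.mapped is False and obj.pages is None
```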

> This is what's done for pages 
> (not PFNs) in mm/memory.c. Let's wait and see how it goes without 
> locking for now.

I agree, let's see how it goes and revisit later if needed.

> 
> Reviewed-by: Loïc Molinari <loic.molinari@collabora.com>

Thanks for the review. The patch has been queued to drm-misc-next-fixes.

Regards,

Boris

[1]https://lore.gitlab.freedesktop.org/drm-ai-reviews/review-patch1-20260320151914.586945-1-boris.brezillon@collabora.com/


