public inbox for linux-kernel@vger.kernel.org
From: Boris Brezillon <boris.brezillon@collabora.com>
To: Mark Brown <broonie@kernel.org>
Cc: Dave Airlie <airlied@redhat.com>,
	DRI <dri-devel@lists.freedesktop.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Linux Next Mailing List <linux-next@vger.kernel.org>,
	Pedro Demarchi Gomes <pedrodemargomes@gmail.com>,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: Re: linux-next: manual merge of the drm tree with the drm-misc-fixes tree
Date: Fri, 20 Mar 2026 16:39:02 +0100
Message-ID: <20260320163902.24571f5d@fedora>
In-Reply-To: <ab1XClKfaCMYzg5-@sirena.org.uk>

Hello Mark,

On Fri, 20 Mar 2026 14:17:46 +0000
Mark Brown <broonie@kernel.org> wrote:

> Hi all,
> 
> Today's linux-next merge of the drm tree got a conflict in:
> 
>   drivers/gpu/drm/drm_gem_shmem_helper.c
> 
> between commit:
> 
>   fc3bbf34e643f ("drm/shmem-helper: Fix huge page mapping in fault handler")
> 
> from the drm-misc-fixes tree and commits:
> 
>   5cf8de6cd1620 ("drm/gem-shmem: Return vm_fault_t from drm_gem_shmem_try_map_pmd()")
>   06f3662cb3ba9 ("drm/gem-shmem: Refactor drm_gem_shmem_try_map_pmd()")
> 
> from the drm tree.
> 
> I fixed it up (see below) and can carry the fix as necessary. This
> is now fixed as far as linux-next is concerned, but any non-trivial
> conflicts should be mentioned to your upstream maintainer when your tree
> is submitted for merging.  You may also want to consider cooperating
> with the maintainer of the conflicting tree to minimise any particularly
> complex conflicts.

I have a slightly different conflict resolution (it's the one we currently
have in drm-tip[1]).

Regards,

Boris

[1] https://gitlab.freedesktop.org/drm/tip

--->8---

diff --cc drivers/gpu/drm/drm_gem_shmem_helper.c
index c549293b5bb6,4500deef4127..2062ca607833
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@@ -574,33 -574,39 +578,38 @@@ static vm_fault_t drm_gem_shmem_any_fau
  {
        struct vm_area_struct *vma = vmf->vma;
        struct drm_gem_object *obj = vma->vm_private_data;
+       struct drm_device *dev = obj->dev;
        struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
        loff_t num_pages = obj->size >> PAGE_SHIFT;
-       vm_fault_t ret;
+       vm_fault_t ret = VM_FAULT_SIGBUS;
        struct page **pages = shmem->pages;
-       pgoff_t page_offset;
+       pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
+       struct page *page;
+       struct folio *folio;
        unsigned long pfn;
  
 +      if (order && order != PMD_ORDER)
 +              return VM_FAULT_FALLBACK;
 +
-       /* Offset to faulty address in the VMA. */
-       page_offset = vmf->pgoff - vma->vm_pgoff;
+       dma_resv_lock(obj->resv, NULL);
  
-       dma_resv_lock(shmem->base.resv, NULL);
- 
-       if (page_offset >= num_pages ||
-           drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
-           shmem->madv < 0) {
-               ret = VM_FAULT_SIGBUS;
+       if (page_offset >= num_pages || drm_WARN_ON_ONCE(dev, !shmem->pages) ||
+           shmem->madv < 0)
                goto out;
-       }
  
-       pfn = page_to_pfn(pages[page_offset]);
+       page = pages[page_offset];
+       if (drm_WARN_ON_ONCE(dev, !page))
+               goto out;
+       folio = page_folio(page);
+ 
+       pfn = page_to_pfn(page);
+ 
 -      if (folio_test_pmd_mappable(folio))
 -              ret = drm_gem_shmem_try_insert_pfn_pmd(vmf, pfn);
 -      if (ret != VM_FAULT_NOPAGE)
 -              ret = vmf_insert_pfn(vma, vmf->address, pfn);
 -
 +      ret = try_insert_pfn(vmf, order, pfn);
+       if (ret == VM_FAULT_NOPAGE)
+               folio_mark_accessed(folio);
  
-  out:
-       dma_resv_unlock(shmem->base.resv);
+ out:
+       dma_resv_unlock(obj->resv);
  
        return ret;
  }
@@@ -644,13 -645,29 +653,32 @@@ static void drm_gem_shmem_vm_close(stru
        drm_gem_vm_close(vma);
  }
  
+ static vm_fault_t drm_gem_shmem_pfn_mkwrite(struct vm_fault *vmf)
+ {
+       struct vm_area_struct *vma = vmf->vma;
+       struct drm_gem_object *obj = vma->vm_private_data;
+       struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+       loff_t num_pages = obj->size >> PAGE_SHIFT;
+       pgoff_t page_offset = vmf->pgoff - vma->vm_pgoff; /* page offset within VMA */
+ 
+       if (drm_WARN_ON(obj->dev, !shmem->pages || page_offset >= num_pages))
+               return VM_FAULT_SIGBUS;
+ 
+       file_update_time(vma->vm_file);
+ 
+       folio_mark_dirty(page_folio(shmem->pages[page_offset]));
+ 
+       return 0;
+ }
+ 
  const struct vm_operations_struct drm_gem_shmem_vm_ops = {
        .fault = drm_gem_shmem_fault,
 +#ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
 +      .huge_fault = drm_gem_shmem_any_fault,
 +#endif
        .open = drm_gem_shmem_vm_open,
        .close = drm_gem_shmem_vm_close,
+       .pfn_mkwrite = drm_gem_shmem_pfn_mkwrite,
  };
  EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
  

