From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Matthew Brost <matthew.brost@intel.com>,
intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: apopple@nvidia.com, airlied@gmail.com, christian.koenig@amd.com,
simona.vetter@ffwll.ch, felix.kuehling@amd.com, dakr@kernel.org
Subject: Re: [PATCH v3 03/30] mm/migrate: Trylock device page in do_swap_page
Date: Tue, 28 Jan 2025 18:26:29 +0100 [thread overview]
Message-ID: <e6e8571882bde6bf1887039ccc6aee989f395e95.camel@linux.intel.com> (raw)
In-Reply-To: <20241217233348.3519726-4-matthew.brost@intel.com>
On Tue, 2024-12-17 at 15:33 -0800, Matthew Brost wrote:
> Avoid multiple CPU page faults to the same device page racing by trying
> to lock the page in do_swap_page before taking an extra reference to
> the page. This prevents scenarios where multiple CPU page faults each
> take an extra reference to a device page, which could abort migration
> in folio_migrate_mapping. With the device page being locked in
> do_swap_page, the migrate_vma_* functions need to be updated to avoid
> locking the fault_page argument.
>
> Prior to this change, a livelock scenario could occur in Xe's (Intel
> GPU DRM driver) SVM implementation if enough threads faulted the same
> device page.
>
> v3:
> - Put page after unlocking page (Alistair)
> - Warn on splitting a THP which is the fault page (Alistair)
> - Warn on dst page == fault page (Alistair)
>
> Cc: Alistair Popple <apopple@nvidia.com>
> Cc: Philip Yang <Philip.Yang@amd.com>
> Cc: Felix Kuehling <felix.kuehling@amd.com>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Suggested-by: Simona Vetter <simona.vetter@ffwll.ch>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
>  mm/memory.c         | 13 ++++++---
>  mm/migrate_device.c | 64 ++++++++++++++++++++++++++++++++-------------
>  2 files changed, 55 insertions(+), 22 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 75c2dfd04f72..ae8b11131dad 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4267,10 +4267,15 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  			 * Get a page reference while we know the page can't be
>  			 * freed.
>  			 */
> -			get_page(vmf->page);
> -			pte_unmap_unlock(vmf->pte, vmf->ptl);
> -			ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
> -			put_page(vmf->page);
> +			if (trylock_page(vmf->page)) {
> +				get_page(vmf->page);
A couple of questions that Matt and I have been discussing around this
change, and that we aren't completely clear about. Perhaps somebody else
has some input:

1) The livelock occurs because we do a page reference check in
migrate_vma_check_page(). However, judging from the documentation, that
check still uses the page refcount to determine whether the page is
pinned. If we were to use folio_maybe_dma_pinned(), which looks at the
pin-count instead, would the problem be solved? However, if there
already is a refcount that we don't know about, that holder could
possibly upgrade it to a pin-count. Does anybody know why
folio_maybe_dma_pinned() isn't used here?
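
To make the distinction concrete, here is a minimal sketch of the two
kinds of check as I understand them. check_by_refcount() and
check_by_pincount() are made-up names, and the refcount variant only
approximates what migrate_vma_check_page() does:

#include <linux/mm.h>

/*
 * Approximation of the current behaviour: migration is aborted if the
 * refcount differs from the references we can account for, so any
 * transient get_page() from a racing fault handler makes this fail.
 */
static bool check_by_refcount(struct folio *folio, int expected_refs)
{
	return folio_ref_count(folio) == expected_refs;
}

/*
 * The alternative discussed above: only a FOLL_PIN pin, which raises
 * the pin-count, would block migration; plain references would be
 * ignored.
 */
static bool check_by_pincount(struct folio *folio)
{
	return !folio_maybe_dma_pinned(folio);
}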
> +				pte_unmap_unlock(vmf->pte, vmf->ptl);
> +				ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
2) The second question concerns allocating memory under a page lock.
There appears to be code avoiding that, arguing that page locks may be
taken during reclaim, at least unless GFP_NOFS is used. Here we'd allow
GFP_KERNEL allocations under a page lock, but one could argue that this
is safe only because it's a device-private page. That said, some
drivers also do GFP_KERNEL allocations under ordinary page locks in
migrate_to_ram(), probably arguing that those pages have been taken off
the LRU. This seems a bit fragile; is there any known policy regarding
this?
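
For concreteness, here is a minimal sketch of the pattern in question.
This is hypothetical driver code, not taken from any driver, assuming a
plain alloc_page() for the system-memory destination and a
->migrate_to_ram() callback that is now entered with the device-private
page locked by do_swap_page():

#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical driver callback, for illustration only. */
static vm_fault_t my_migrate_to_ram(struct vm_fault *vmf)
{
	struct page *dpage;

	/* vmf->page (device-private) was locked by do_swap_page(). */
	dpage = alloc_page(GFP_HIGHUSER_MOVABLE);
	if (!dpage)
		return VM_FAULT_SIGBUS;

	/*
	 * The GFP_KERNEL-class allocation above may enter direct
	 * reclaim, and reclaim takes page locks. Doing this under the
	 * lock on vmf->page is only safe if reclaim can never try to
	 * lock that page, e.g. because device-private pages are not on
	 * the LRU.
	 */

	/* ... set up struct migrate_vma, copy data, finalize ... */
	return 0;
}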
Thanks,
Thomas
> +				unlock_page(vmf->page);
> +				put_page(vmf->page);
> +			} else {
> +				pte_unmap_unlock(vmf->pte, vmf->ptl);
> +			}
>  		} else if (is_hwpoison_entry(entry)) {
>  			ret = VM_FAULT_HWPOISON;
>  		} else if (is_pte_marker_entry(entry)) {
> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> index 19960743f927..3470357d9bae 100644
> --- a/mm/migrate_device.c
> +++ b/mm/migrate_device.c
> @@ -60,6 +60,8 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>  				   struct mm_walk *walk)
>  {
>  	struct migrate_vma *migrate = walk->private;
> +	struct folio *fault_folio = migrate->fault_page ?
> +		page_folio(migrate->fault_page) : NULL;
>  	struct vm_area_struct *vma = walk->vma;
>  	struct mm_struct *mm = vma->vm_mm;
>  	unsigned long addr = start, unmapped = 0;
> @@ -88,11 +90,16 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>
>  		folio_get(folio);
>  		spin_unlock(ptl);
> +		/* FIXME support THP */
> +		if (WARN_ON_ONCE(fault_folio == folio))
> +			return migrate_vma_collect_skip(start, end, walk);
>  		if (unlikely(!folio_trylock(folio)))
>  			return migrate_vma_collect_skip(start, end, walk);
>  		ret = split_folio(folio);
> -		folio_unlock(folio);
> +		if (fault_folio != folio)
> +			folio_unlock(folio);
>  		folio_put(folio);
>  		if (ret)
>  			return migrate_vma_collect_skip(start, end, walk);
> @@ -192,7 +199,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>  		 * optimisation to avoid walking the rmap later with
>  		 * try_to_migrate().
>  		 */
> -		if (folio_trylock(folio)) {
> +		if (fault_folio == folio || folio_trylock(folio)) {
>  			bool anon_exclusive;
>  			pte_t swp_pte;
>
> @@ -204,7 +211,8 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>
>  			if (folio_try_share_anon_rmap_pte(folio, page)) {
>  				set_pte_at(mm, addr, ptep, pte);
> -				folio_unlock(folio);
> +				if (fault_folio != folio)
> +					folio_unlock(folio);
>  				folio_put(folio);
>  				mpfn = 0;
>  				goto next;
> @@ -363,6 +371,8 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
>  					  unsigned long npages,
>  					  struct page *fault_page)
>  {
> +	struct folio *fault_folio = fault_page ?
> +		page_folio(fault_page) : NULL;
>  	unsigned long i, restore = 0;
>  	bool allow_drain = true;
>  	unsigned long unmapped = 0;
> @@ -427,7 +437,8 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
>  			remove_migration_ptes(folio, folio, 0);
>
>  			src_pfns[i] = 0;
> -			folio_unlock(folio);
> +			if (fault_folio != folio)
> +				folio_unlock(folio);
>  			folio_put(folio);
>  			restore--;
>  		}
> @@ -536,6 +547,8 @@ int migrate_vma_setup(struct migrate_vma *args)
>  		return -EINVAL;
>  	if (args->fault_page && !is_device_private_page(args->fault_page))
>  		return -EINVAL;
> +	if (args->fault_page && !PageLocked(args->fault_page))
> +		return -EINVAL;
>
>  	memset(args->src, 0, sizeof(*args->src) * nr_pages);
>  	args->cpages = 0;
> @@ -799,19 +812,13 @@ void migrate_vma_pages(struct migrate_vma *migrate)
>  }
>  EXPORT_SYMBOL(migrate_vma_pages);
>
> -/*
> - * migrate_device_finalize() - complete page migration
> - * @src_pfns: src_pfns returned from migrate_device_range()
> - * @dst_pfns: array of pfns allocated by the driver to migrate memory to
> - * @npages: number of pages in the range
> - *
> - * Completes migration of the page by removing special migration entries.
> - * Drivers must ensure copying of page data is complete and visible to the CPU
> - * before calling this.
> - */
> -void migrate_device_finalize(unsigned long *src_pfns,
> -			unsigned long *dst_pfns, unsigned long npages)
> +static void __migrate_device_finalize(unsigned long *src_pfns,
> +				      unsigned long *dst_pfns,
> +				      unsigned long npages,
> +				      struct page *fault_page)
>  {
> +	struct folio *fault_folio = fault_page ?
> +		page_folio(fault_page) : NULL;
>  	unsigned long i;
>
>  	for (i = 0; i < npages; i++) {
> @@ -824,6 +831,7 @@ void migrate_device_finalize(unsigned long *src_pfns,
>
>  		if (!page) {
>  			if (dst) {
> +				WARN_ON_ONCE(fault_folio == dst);
>  				folio_unlock(dst);
>  				folio_put(dst);
>  			}
> @@ -834,6 +842,7 @@ void migrate_device_finalize(unsigned long *src_pfns,
>
>  		if (!(src_pfns[i] & MIGRATE_PFN_MIGRATE) || !dst) {
>  			if (dst) {
> +				WARN_ON_ONCE(fault_folio == dst);
>  				folio_unlock(dst);
>  				folio_put(dst);
>  			}
> @@ -841,7 +850,8 @@ void migrate_device_finalize(unsigned long *src_pfns,
>  		}
>
>  		remove_migration_ptes(src, dst, 0);
> -		folio_unlock(src);
> +		if (fault_folio != src)
> +			folio_unlock(src);
>
>  		if (folio_is_zone_device(src))
>  			folio_put(src);
> @@ -849,6 +859,7 @@ void migrate_device_finalize(unsigned long *src_pfns,
>  			folio_putback_lru(src);
>
>  		if (dst != src) {
> +			WARN_ON_ONCE(fault_folio == dst);
>  			folio_unlock(dst);
>  			if (folio_is_zone_device(dst))
>  				folio_put(dst);
> @@ -857,6 +868,22 @@ void migrate_device_finalize(unsigned long *src_pfns,
>  		}
>  	}
>  }
> +
> +/*
> + * migrate_device_finalize() - complete page migration
> + * @src_pfns: src_pfns returned from migrate_device_range()
> + * @dst_pfns: array of pfns allocated by the driver to migrate memory to
> + * @npages: number of pages in the range
> + *
> + * Completes migration of the page by removing special migration entries.
> + * Drivers must ensure copying of page data is complete and visible to the CPU
> + * before calling this.
> + */
> +void migrate_device_finalize(unsigned long *src_pfns,
> +			unsigned long *dst_pfns, unsigned long npages)
> +{
> +	return __migrate_device_finalize(src_pfns, dst_pfns, npages, NULL);
> +}
>  EXPORT_SYMBOL(migrate_device_finalize);
>
>  /**
> @@ -872,7 +899,8 @@ EXPORT_SYMBOL(migrate_device_finalize);
>   */
>  void migrate_vma_finalize(struct migrate_vma *migrate)
>  {
> -	migrate_device_finalize(migrate->src, migrate->dst, migrate->npages);
> +	__migrate_device_finalize(migrate->src, migrate->dst, migrate->npages,
> +				  migrate->fault_page);
>  }
>  EXPORT_SYMBOL(migrate_vma_finalize);
>