From: Francois Dugast <francois.dugast@intel.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	"Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
	"Michal Mrozek" <michal.mrozek@intel.com>
Subject: Re: [PATCH v2 5/5] drm/pagemap: Enable THP support for GPU memory migration
Date: Tue, 6 Jan 2026 13:47:23 +0100
Message-ID: <aV0EVEk-CMyI5OPr@fdugast-desk>
In-Reply-To: <aVviQ6VcADOqiG/j@lstrano-desk.jf.intel.com>

On Mon, Jan 05, 2026 at 08:09:39AM -0800, Matthew Brost wrote:
> On Mon, Jan 05, 2026 at 12:18:28PM +0100, Francois Dugast wrote:
> > This enables support for Transparent Huge Pages (THP) for device pages by
> > using MIGRATE_VMA_SELECT_COMPOUND during migration. It removes the need to
> > split folios and loop multiple times over all pages to perform the required
> > operations at page level. Instead, we rely on the newly introduced support
> > for higher orders in drm_pagemap and the folio-level API.
> > 
> > In Xe, this drastically improves performance when using SVM. The GT stats
> > below, collected after a 2MB page fault, show that overall fault servicing
> > is more than 7 times faster, and thanks to reduced CPU overhead the time
> > spent on the actual copy goes from 23% without THP to 80% with THP:
> > 
> > Without THP:
> > 
> >     svm_2M_pagefault_us: 966
> >     svm_2M_migrate_us: 942
> >     svm_2M_device_copy_us: 223
> >     svm_2M_get_pages_us: 9
> >     svm_2M_bind_us: 10
> > 
> > With THP:
> > 
> >     svm_2M_pagefault_us: 132
> >     svm_2M_migrate_us: 128
> >     svm_2M_device_copy_us: 106
> >     svm_2M_get_pages_us: 1
> >     svm_2M_bind_us: 2
> > 
> > v2:
> > - Fix one occurrence of drm_pagemap_get_devmem_page() (Matthew Brost)
> > 
> > Cc: Matthew Brost <matthew.brost@intel.com>
> > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > Cc: Michal Mrozek <michal.mrozek@intel.com>
> > Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> > ---
> >  drivers/gpu/drm/drm_pagemap.c | 77 +++++++++++++++++++++++++++++++----
> >  drivers/gpu/drm/xe/xe_svm.c   |  4 ++
> >  include/drm/drm_pagemap.h     |  3 ++
> >  3 files changed, 76 insertions(+), 8 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> > index 05e708730132..1ea8526ce946 100644
> > --- a/drivers/gpu/drm/drm_pagemap.c
> > +++ b/drivers/gpu/drm/drm_pagemap.c
> > @@ -200,16 +200,20 @@ static void drm_pagemap_migration_unlock_put_pages(unsigned long npages,
> >  /**
> >   * drm_pagemap_get_devmem_page() - Get a reference to a device memory page
> >   * @page: Pointer to the page
> > + * @order: Order
> >   * @zdd: Pointer to the GPU SVM zone device data
> >   *
> >   * This function associates the given page with the specified GPU SVM zone
> >   * device data and initializes it for zone device usage.
> >   */
> >  static void drm_pagemap_get_devmem_page(struct page *page,
> > +					unsigned int order,
> >  					struct drm_pagemap_zdd *zdd)
> >  {
> > -	page->zone_device_data = drm_pagemap_zdd_get(zdd);
> > -	zone_device_page_init(page, 0);
> > +	struct folio *folio = page_folio(page);
> > +
> > +	folio_set_zone_device_data(folio, drm_pagemap_zdd_get(zdd));
> > +	zone_device_page_init(page, order);
> >  }
> >  
> >  /**
> > @@ -522,7 +526,7 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
> >  		.end		= end,
> >  		.pgmap_owner	= pagemap->owner,
> >  		.flags		= MIGRATE_VMA_SELECT_SYSTEM | MIGRATE_VMA_SELECT_DEVICE_COHERENT |
> > -		MIGRATE_VMA_SELECT_DEVICE_PRIVATE,
> > +		MIGRATE_VMA_SELECT_DEVICE_PRIVATE | MIGRATE_VMA_SELECT_COMPOUND,
> >  	};
> >  	unsigned long i, npages = npages_in_range(start, end);
> >  	unsigned long own_pages = 0, migrated_pages = 0;
> > @@ -628,10 +632,13 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
> >  
> >  	own_pages = 0;
> >  
> > -	for (i = 0; i < npages; ++i) {
> > +	mutex_lock(&dpagemap->folio_split_lock);
> > +	for (i = 0; i < npages;) {
> > +		unsigned long j;
> >  		struct page *page = pfn_to_page(migrate.dst[i]);
> >  		struct page *src_page = migrate_pfn_to_page(migrate.src[i]);
> >  		cur.start = i;
> > +		unsigned int order;
> >  
> >  		pages[i] = NULL;
> >  		if (src_page && is_device_private_page(src_page)) {
> > @@ -658,7 +665,23 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
> >  			pages[i] = page;
> >  		}
> >  		migrate.dst[i] = migrate_pfn(migrate.dst[i]);
> > -		drm_pagemap_get_devmem_page(page, zdd);
> > +
> > +		if (migrate.src[i] & MIGRATE_PFN_COMPOUND) {
> > +			order = HPAGE_PMD_ORDER;
> 
> Can we assert the folio order is HPAGE_PMD_ORDER? Eventually mTHP could
> be enabled on device folios, where the order could differ from
> HPAGE_PMD_ORDER. I suspect we will need some small updates to GPU SVM /
> Xe to support mTHP, and this assert would pop, telling us we need to fix
> GPU SVM.

Yes, good idea.
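
A minimal sketch of what I have in mind, untested and assuming the source
folio is the right one to check here:

    if (migrate.src[i] & MIGRATE_PFN_COMPOUND) {
            order = HPAGE_PMD_ORDER;
            /*
             * GPU SVM / Xe only handle PMD-sized device folios for now,
             * so this should pop once mTHP gets enabled and the order
             * can differ.
             */
            WARN_ON_ONCE(src_page &&
                         folio_order(page_folio(src_page)) != order);
            ...
    }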

> 
> > +
> > +			migrate.dst[i] |= MIGRATE_PFN_COMPOUND;
> > +
> > +			for (j = 1; j < NR_PAGES(order) && i + j < npages; j++)
> > +				migrate.dst[i + j] = 0;
> > +
> > +		} else {
> > +			order = 0;
> > +
> > +			if (folio_order(page_folio(page)))
> > +				migrate_device_split_page(page);
> > +		}
> > +
> > +		drm_pagemap_get_devmem_page(page, order, zdd);
> >  
> >  		/* If we switched the migrating drm_pagemap, migrate previous pages now */
> >  		err = drm_pagemap_migrate_range(devmem_allocation, migrate.src, migrate.dst,
> > @@ -666,7 +689,11 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
> >  						mdetails);
> >  		if (err)
> >  			goto err_finalize;
> > +
> > +		i += NR_PAGES(order);
> >  	}
> > +	mutex_unlock(&dpagemap->folio_split_lock);
> > +
> >  	cur.start = npages;
> >  	cur.ops = NULL; /* Force migration */
> >  	err = drm_pagemap_migrate_range(devmem_allocation, migrate.src, migrate.dst,
> > @@ -775,6 +802,8 @@ static int drm_pagemap_migrate_populate_ram_pfn(struct vm_area_struct *vas,
> >  		page = folio_page(folio, 0);
> >  		mpfn[i] = migrate_pfn(page_to_pfn(page));
> >  
> > +		if (order)
> > +			mpfn[i] |= MIGRATE_PFN_COMPOUND;
> >  next:
> >  		if (page)
> >  			addr += page_size(page);
> > @@ -1030,8 +1059,15 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
> >  	if (err)
> >  		goto err_finalize;
> >  
> > -	for (i = 0; i < npages; ++i)
> > +	for (i = 0; i < npages;) {
> > +		unsigned int order = 0;
> > +
> >  		pages[i] = migrate_pfn_to_page(src[i]);
> > +		if (pages[i])
> > +			order = folio_order(page_folio(pages[i]));
> > +
> > +		i += NR_PAGES(order);
> > +	}
> >  
> >  	err = ops->copy_to_ram(pages, pagemap_addr, npages, NULL);
> >  	if (err)
> > @@ -1084,7 +1120,8 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> >  		.vma		= vas,
> >  		.pgmap_owner	= page_pgmap(page)->owner,
> >  		.flags		= MIGRATE_VMA_SELECT_DEVICE_PRIVATE |
> > -		MIGRATE_VMA_SELECT_DEVICE_COHERENT,
> > +		MIGRATE_VMA_SELECT_DEVICE_COHERENT |
> > +		MIGRATE_VMA_SELECT_COMPOUND,
> 
> The alignment was off here - e.g. my vi settings make the original code
> look like this:
> 
>                 .flags          = MIGRATE_VMA_SELECT_DEVICE_PRIVATE |
>                         MIGRATE_VMA_SELECT_DEVICE_COHERENT,
> 
> Since you are changing this line, can you fix the alignment?

Sure.
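
For v3, realigned to match the existing style:

    .flags          = MIGRATE_VMA_SELECT_DEVICE_PRIVATE |
            MIGRATE_VMA_SELECT_DEVICE_COHERENT |
            MIGRATE_VMA_SELECT_COMPOUND,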

> 
> >  		.fault_page	= page,
> >  	};
> >  	struct drm_pagemap_migrate_details mdetails = {};
> > @@ -1150,8 +1187,15 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> >  	if (err)
> >  		goto err_finalize;
> >  
> > -	for (i = 0; i < npages; ++i)
> > +	for (i = 0; i < npages;) {
> > +		unsigned int order = 0;
> > +
> >  		pages[i] = migrate_pfn_to_page(migrate.src[i]);
> > +		if (pages[i])
> > +			order = folio_order(page_folio(pages[i]));
> > +
> > +		i += NR_PAGES(order);
> > +	}
> >  
> >  	err = ops->copy_to_ram(pages, pagemap_addr, npages, NULL);
> >  	if (err)
> > @@ -1209,9 +1253,26 @@ static vm_fault_t drm_pagemap_migrate_to_ram(struct vm_fault *vmf)
> >  	return err ? VM_FAULT_SIGBUS : 0;
> >  }
> >  
> > +static void drm_pagemap_folio_split(struct folio *orig_folio, struct folio *new_folio)
> > +{
> > +	struct drm_pagemap_zdd *zdd;
> > +
> > +	if (!new_folio)
> > +		return;
> > +
> > +	new_folio->pgmap = orig_folio->pgmap;
> > +	zdd = folio_zone_device_data(orig_folio);
> > +	if (folio_order(new_folio))
> > +		folio_set_zone_device_data(new_folio, drm_pagemap_zdd_get(zdd));
> > +	else
> > +		folio_page(new_folio, 0)->zone_device_data =
> > +			drm_pagemap_zdd_get(zdd);
> 
> Is this if statement needed? I believe folio_set_zone_device_data can
> just be called.

Yes, we can probably do so regardless of the order.
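
With the if dropped, the callback would become something like this, assuming
folio_set_zone_device_data() is also fine for order-0 folios:

    static void drm_pagemap_folio_split(struct folio *orig_folio,
                                        struct folio *new_folio)
    {
            struct drm_pagemap_zdd *zdd;

            if (!new_folio)
                    return;

            /* Split folios inherit the pgmap and take a zdd reference */
            new_folio->pgmap = orig_folio->pgmap;
            zdd = folio_zone_device_data(orig_folio);
            folio_set_zone_device_data(new_folio, drm_pagemap_zdd_get(zdd));
    }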

> 
> > +}
> > +
> >  static const struct dev_pagemap_ops drm_pagemap_pagemap_ops = {
> >  	.folio_free = drm_pagemap_folio_free,
> >  	.migrate_to_ram = drm_pagemap_migrate_to_ram,
> > +	.folio_split = drm_pagemap_folio_split,
> >  };
> >  
> >  /**
> > diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> > index fa2ee2c08f31..05dba6abbcc8 100644
> > --- a/drivers/gpu/drm/xe/xe_svm.c
> > +++ b/drivers/gpu/drm/xe/xe_svm.c
> > @@ -1760,6 +1760,10 @@ static struct xe_pagemap *xe_pagemap_create(struct xe_device *xe, struct xe_vram
> >  	if (err)
> >  		goto out_no_dpagemap;
> >  
> > +	err = drmm_mutex_init(&xe->drm, &dpagemap->folio_split_lock);
> > +	if (err)
> > +		goto out_err;
> > +
> >  	res = devm_request_free_mem_region(dev, &iomem_resource,
> >  					   vr->usable_size);
> >  	if (IS_ERR(res)) {
> > diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
> > index 736fb6cb7b33..3c8bacfc79e6 100644
> > --- a/include/drm/drm_pagemap.h
> > +++ b/include/drm/drm_pagemap.h
> > @@ -161,6 +161,7 @@ struct drm_pagemap_ops {
> >   * &struct drm_pagemap. May be NULL if no cache is used.
> >   * @shrink_link: Link into the shrinker's list of drm_pagemaps. Only
> >   * used if also using a pagemap cache.
> > + * @folio_split_lock: Lock to protect device folio splitting.
> >   */
> >  struct drm_pagemap {
> >  	const struct drm_pagemap_ops *ops;
> > @@ -170,6 +171,8 @@ struct drm_pagemap {
> >  	struct drm_pagemap_dev_hold *dev_hold;
> >  	struct drm_pagemap_cache *cache;
> >  	struct list_head shrink_link;
> > +	/* Protect device folio splitting */
> 
> I don't think you need this comment, as we have the kernel-doc.

This is redundant, but it avoids this checkpatch warning:

    -:248: CHECK:UNCOMMENTED_DEFINITION: struct mutex definition without comment
    #248: FILE: include/drm/drm_pagemap.h:174:
    +       struct mutex folio_split_lock;

Francois

> 
> Nits aside, patch looks good.
> 
> Matt
> 
> > +	struct mutex folio_split_lock;
> >  };
> >  
> >  struct drm_pagemap_devmem;
> > -- 
> > 2.43.0
> > 

