Intel-XE Archive on lore.kernel.org
From: Matthew Auld <matthew.auld@intel.com>
To: "Matthew Brost" <matthew.brost@intel.com>,
	"Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: Re: [PATCH 3/7] drm/gpusvm: mark pages as dirty
Date: Fri, 21 Mar 2025 11:37:37 +0000	[thread overview]
Message-ID: <ecdf304f-437e-4a9b-b165-37570a81b158@intel.com> (raw)
In-Reply-To: <Z9xtgs/2etNDxStw@lstrano-desk.jf.intel.com>

On 20/03/2025 19:33, Matthew Brost wrote:
> On Thu, Mar 20, 2025 at 08:29:42PM +0100, Thomas Hellström wrote:
>> On Thu, 2025-03-20 at 17:30 +0000, Matthew Auld wrote:
>>> If the memory is going to be accessed by the device, make sure we
>>> mark the pages accordingly such that the kernel knows this. This
>>> aligns with the xe-userptr code.
>>>
>>> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
>>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>> ---
>>>   drivers/gpu/drm/drm_gpusvm.c | 9 +++++++++
>>>   1 file changed, 9 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
>>> index 7f1cf5492bba..5b4ecd36dff1 100644
>>> --- a/drivers/gpu/drm/drm_gpusvm.c
>>> +++ b/drivers/gpu/drm/drm_gpusvm.c
>>> @@ -1471,6 +1471,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
>>>   			pages[i] = page;
>>>   		} else {
>>>   			dma_addr_t addr;
>>> +			unsigned int k;
>>>   
>>>   			if (is_zone_device_page(page) || zdd) {
>>>   				err = -EOPNOTSUPP;
>>> @@ -1489,6 +1490,14 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
>>>   			range->dma_addr[j] = drm_pagemap_device_addr_encode
>>>   				(addr, DRM_INTERCONNECT_SYSTEM, order,
>>>   				 dma_dir);
>>> +
>>> +			for (k = 0; k < 1u << order; k++) {
>>> +				if (!ctx->read_only)
>>> +					set_page_dirty_lock(page);
>>> +
>>> +				mark_page_accessed(page);
>>> +				page++;
>>> +			}
>>
>> Actually I think the userptr code did this unnecessarily. The dirtying
>> is done in the CPU page-fault handler, which means it's taken care of
>> during hmm_range_fault(). Now if the CPU PTE happens to be present and
>> writeable there will be no fault, but the page was already dirtied when
>> it was faulted in anyway.
>>
>> If there was a page cleaning event in between so the dirty flag was
>> dropped, then my understanding is that in addition to an invalidation
>> notifier, also the CPU PTE is zapped, so that it will be dirtied again
>> on the next write access, either by the CPU faulting the page or
>> hmm_range_fault() if there is a GPU page-fault.
>>
>> So I think we're good without this patch.
>>
> 
> I was going to suggest the same thing as Thomas - we are good without
> this patch for the reasons he states.

Ah, will drop this then. Thanks.
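
For the archive, the invariant can be sketched as a toy state machine
(purely illustrative -- hypothetical structs and helpers, nothing like the
real kernel page/PTE machinery): a writable CPU PTE implies the page was
dirtied at fault time, and a cleaning event that drops the dirty flag also
write-protects the PTE and fires the invalidation notifier, so the page is
re-dirtied via a fresh fault before either the CPU or the GPU can write
again.

```c
/* Hypothetical toy model of the lifecycle described above -- plain
 * structs and helpers for illustration only, not real kernel code. */
#include <assert.h>
#include <stdbool.h>

struct toy_page {
	bool pte_present;	/* CPU PTE installed */
	bool pte_writable;	/* CPU PTE allows writes */
	bool dirty;		/* PG_dirty analogue */
	bool gpu_mapped;	/* device holds a valid binding */
};

/* CPU write fault: installs a writable PTE and dirties the page. */
static void toy_cpu_write_fault(struct toy_page *p)
{
	p->pte_present = true;
	p->pte_writable = true;
	p->dirty = true;
}

/* hmm_range_fault() with a write request: if no writable PTE exists,
 * it drives the same fault path a CPU write would take. */
static void toy_hmm_range_fault_write(struct toy_page *p)
{
	if (!p->pte_present || !p->pte_writable)
		toy_cpu_write_fault(p);
	p->gpu_mapped = true;
}

/* Page cleaning: clears the dirty flag, write-protects the CPU PTE,
 * and the invalidation notifier tears down the GPU mapping too. */
static void toy_page_clean(struct toy_page *p)
{
	p->dirty = false;
	p->pte_writable = false;
	p->gpu_mapped = false;
}
```

In this model there is no reachable state where the device holds a
writable mapping of a clean page, which is why the extra
set_page_dirty_lock() loop in the patch buys nothing.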

> 
> Matt
> 
>> /Thomas
>>
>>
>>
>>>   		}
>>>   		i += 1 << order;
>>>   		num_dma_mapped = i;
>>



Thread overview: 25+ messages
2025-03-20 17:29 [PATCH 0/7] Replace xe_hmm with gpusvm Matthew Auld
2025-03-20 17:29 ` [PATCH 1/7] drm/gpusvm: fix hmm_pfn_to_map_order() usage Matthew Auld
2025-03-20 19:52   ` Thomas Hellström
2025-03-20 20:40   ` Matthew Brost
2025-03-21 11:42     ` Matthew Auld
2025-03-20 17:29 ` [PATCH 2/7] drm/gpusvm: use more selective dma dir in get_pages() Matthew Auld
2025-03-20 19:31   ` Thomas Hellström
2025-03-20 19:37   ` Matthew Brost
2025-03-20 17:30 ` [PATCH 3/7] drm/gpusvm: mark pages as dirty Matthew Auld
2025-03-20 19:29   ` Thomas Hellström
2025-03-20 19:33     ` Matthew Brost
2025-03-21 11:37       ` Matthew Auld [this message]
2025-03-20 17:30 ` [PATCH 4/7] drm/gpusvm: pull out drm_gpusvm_pages substructure Matthew Auld
2025-03-20 20:43   ` Matthew Brost
2025-03-21 11:53     ` Matthew Auld
2025-03-20 17:30 ` [PATCH 5/7] drm/gpusvm: lower get/unmap pages Matthew Auld
2025-03-20 20:59   ` Matthew Brost
2025-03-20 17:30 ` [PATCH 6/7] drm/gpusvm: support basic_range Matthew Auld
2025-03-20 21:04   ` Matthew Brost
2025-03-20 17:30 ` [PATCH 7/7] drm/xe/userptr: replace xe_hmm with gpusvm Matthew Auld
2025-03-20 20:52   ` Matthew Brost
2025-03-25  9:50     ` Ghimiray, Himal Prasad
2025-03-20 17:37 ` ✓ CI.Patch_applied: success for Replace " Patchwork
2025-03-20 17:37 ` ✗ CI.checkpatch: warning " Patchwork
2025-03-20 17:38 ` ✗ CI.KUnit: failure " Patchwork
