Intel-XE Archive on lore.kernel.org
From: Matthew Auld <matthew.auld@intel.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
	intel-xe@lists.freedesktop.org
Cc: himal.prasad.ghimiray@intel.com, Matthew Brost <matthew.brost@intel.com>
Subject: Re: [PATCH v3 3/5] drm/xe/bo: Add a bo remove callback
Date: Tue, 25 Mar 2025 10:08:05 +0000	[thread overview]
Message-ID: <ed599ccb-2027-4936-acc1-e01ef5f6c42d@intel.com> (raw)
In-Reply-To: <4edfc75d575412439efd6dcad15d875eb1c557f5.camel@linux.intel.com>

On 25/03/2025 09:07, Thomas Hellström wrote:
> On Tue, 2025-03-25 at 09:02 +0000, Matthew Auld wrote:
>> On 24/03/2025 16:54, Thomas Hellström wrote:
>>> On device unbind, migrate exported bos, including pagemap bos to
>>> system. This allows importers to take proper action without
>>> disruption. In particular, SVM clients on remote devices may
>>> continue as if nothing happened, and can choose a different
>>> placement.
>>>
>>> The evict_flags() placement is chosen in such a way that bos that
>>> aren't exported are purged.
>>>
>>> For pinned bos, we unmap DMA, but their pages are not freed yet
>>> since we can't be 100% sure they are not accessed.
>>>
>>> All pinned external bos (not just the VRAM ones) are put on the
>>> pinned.external list with this patch. But this only affects the
>>> xe_bo_pci_dev_remove_pinned() function since !VRAM bos are
>>> ignored by the suspend / resume functionality. As a follow-up we
>>> could look at removing the suspend / resume iteration over
>>> pinned external bos since we currently don't allow pinning
>>> external bos in VRAM, and other external bos don't need any
>>> special treatment at suspend / resume.
>>>
>>> v2:
>>> - Address review comments. (Matthew Auld).
>>> v3:
>>> - Don't introduce an external_evicted list (Matthew Auld)
>>> - Add a discussion around suspend / resume behaviour to the
>>>     commit message.
>>> - Formatting fixes.
>>>
>>> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>
>> Reviewed-by: Matthew Auld <matthew.auld@intel.com>
>>
> 
> Actually, there is a CI failure on LNL indicating that the pinned
> kernel-bo dma-maps are actually needed at devm-managed release.

Hmm, do you have a link? The failure I see looks to be more probe 
related. Once we do unplug(), outside the special evict-all we do here, 
we should pretty much not need the dma-maps anymore, right?

What about moving the evict_all into a well-placed devm action during 
probe, i.e. at the point at which we think it is reasonable to get rid 
of the dma-maps? Or is that what you mean below?

> 
> I'm in the process of testing this out on LNL, and if so I'll drop
> these dma-unmaps and we'd continue down the route of ensuring that
> these subsystems are indeed devm_ managed and not drmm_ managed.
> 
> Thanks,
> Thomas
> 



Thread overview: 18+ messages
2025-03-24 16:54 [PATCH v3 0/5] drm/xe: xe-only patches from the multi-device GPUSVM series Thomas Hellström
2025-03-24 16:54 ` [PATCH v3 1/5] drm/xe: Introduce CONFIG_DRM_XE_GPUSVM Thomas Hellström
2025-03-24 16:54 ` [PATCH v3 2/5] drm/xe/svm: Fix a potential bo UAF Thomas Hellström
2025-03-24 16:54 ` [PATCH v3 3/5] drm/xe/bo: Add a bo remove callback Thomas Hellström
2025-03-25  9:02   ` Matthew Auld
2025-03-25  9:07     ` Thomas Hellström
2025-03-25 10:08       ` Matthew Auld [this message]
2025-03-25 16:45         ` Thomas Hellström
2025-03-24 16:54 ` [PATCH v3 4/5] drm/xe/migrate: Allow xe_migrate_vram() also on non-pagefault capable devices Thomas Hellström
2025-03-24 16:55 ` [PATCH v3 5/5] drm/xe: Make the PT code handle placement per PTE rather than per vma / range Thomas Hellström
2025-03-24 17:00 ` ✓ CI.Patch_applied: success for drm/xe: xe-only patches from the multi-device GPUSVM series (rev5) Patchwork
2025-03-24 17:01 ` ✗ CI.checkpatch: warning " Patchwork
2025-03-24 17:02 ` ✓ CI.KUnit: success " Patchwork
2025-03-24 17:18 ` ✓ CI.Build: " Patchwork
2025-03-24 17:21 ` ✓ CI.Hooks: " Patchwork
2025-03-24 17:22 ` ✓ CI.checksparse: " Patchwork
2025-03-24 17:41 ` ✓ Xe.CI.BAT: " Patchwork
2025-03-24 19:37 ` ✗ Xe.CI.Full: failure " Patchwork
