From: Matthew Auld <matthew.auld@intel.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
intel-xe@lists.freedesktop.org
Subject: Re: [PATCH 2/3] drm/xe: share bo dma-resv with backup object
Date: Mon, 14 Apr 2025 11:32:11 +0100
Message-ID: <31710fae-5e44-44e4-94e4-e06b2410e639@intel.com>
In-Reply-To: <93df6de4ea8945ed0aeafb001f55d04b9e62d211.camel@linux.intel.com>

Hi,
On 11/04/2025 16:12, Thomas Hellström wrote:
> On Thu, 2025-04-10 at 17:20 +0100, Matthew Auld wrote:
>> We end up needing to grab both locks together anyway and keep them
>> held until we complete the copy or add the fence. Plus the backup_obj
>> is short lived and tied to the parent object, so seems reasonable to
>> share the same dma-resv. This will simplify the locking here, and in
>> follow up patches.
>>
>> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>
> Is there any chance that the bo dma-resv is freed before the backup
> object's resv is individualized?
>
> If not, perhaps a short description why that can never happen?

Thanks for reviewing. My thinking was that there should only be one
reference on the backup bo, which is dropped by the parent bo in either
the unpin or the unprepare step, whichever gets there first. In both
cases there is still a reference on the parent when we drop the backup
ref. The individualize step looks to be synchronous in ttm, so it should
happen within the scope of holding the parent lock, meaning the parent
bo can't disappear underneath it.

But as you say, maybe this is inviting trouble later if there is some
hypothetical way for something to grab an extra reference on the backup
bo. What about if, in addition, we also hold a reference on the parent
bo, which is then dropped after the individualize step:

https://gitlab.freedesktop.org/mwa/kernel/-/commit/3b72a079a1cd9da9590da82912798785ede8c97f
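
To make the intended lifetime rule concrete, here is a toy sketch of the
refcount ordering that commit aims for. All names here are made up for
illustration (this is not the actual xe/ttm API): the backup's final put
runs the "individualize" step, and because the backup holds an extra
reference on its parent until after that step, the parent's resv is
guaranteed to still be alive when individualization happens.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model: a refcounted object; the backup carries a back-reference
 * to the parent whose resv it shares. */
struct toy_bo {
        int refcount;
        struct toy_bo *parent;  /* set on the backup object only */
};

static int individualize_ok;    /* was the parent alive at individualize? */

static struct toy_bo *toy_get(struct toy_bo *bo)
{
        bo->refcount++;
        return bo;
}

static void toy_put(struct toy_bo *bo)
{
        if (--bo->refcount > 0)
                return;
        if (bo->parent) {
                /* Final put of the backup: its shared resv is
                 * "individualized" here, which is only safe while the
                 * parent is still alive. */
                individualize_ok = (bo->parent->refcount > 0);
                toy_put(bo->parent);    /* drop the extra parent ref last */
        }
        free(bo);
}

int demo(void)
{
        struct toy_bo *parent = calloc(1, sizeof(*parent));
        struct toy_bo *backup = calloc(1, sizeof(*backup));

        parent->refcount = 1;              /* caller's ref on the parent */
        backup->refcount = 1;              /* single ref, held by the parent */
        backup->parent = toy_get(parent);  /* extra parent ref, per proposal */

        toy_put(backup);   /* unpin/unprepare drops the backup ref */
        toy_put(parent);   /* caller drops the parent afterwards */
        return individualize_ok;
}
```

With this ordering the model reports the parent alive at individualize
time even if the backup's final put is the last thing to touch it.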
>
> /Thomas
>
>
>> ---
>> drivers/gpu/drm/xe/xe_bo.c | 24 +++++++++---------------
>> 1 file changed, 9 insertions(+), 15 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>> index c337790c81ae..3eab6352d9dc 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.c
>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>> @@ -1120,9 +1120,10 @@ int xe_bo_evict_pinned(struct xe_bo *bo)
>>          if (bo->flags & XE_BO_FLAG_PINNED_NORESTORE)
>>                  goto out_unlock_bo;
>>  
>> -        backup = xe_bo_create_locked(xe, NULL, NULL, bo->size, ttm_bo_type_kernel,
>> -                                     XE_BO_FLAG_SYSTEM | XE_BO_FLAG_NEEDS_CPU_ACCESS |
>> -                                     XE_BO_FLAG_PINNED);
>> +        backup = ___xe_bo_create_locked(xe, NULL, NULL, bo->ttm.base.resv, NULL, bo->size,
>> +                                        DRM_XE_GEM_CPU_CACHING_WB, ttm_bo_type_kernel,
>> +                                        XE_BO_FLAG_SYSTEM | XE_BO_FLAG_NEEDS_CPU_ACCESS |
>> +                                        XE_BO_FLAG_PINNED);
>>          if (IS_ERR(backup)) {
>>                  ret = PTR_ERR(backup);
>>                  goto out_unlock_bo;
>> @@ -1177,7 +1178,6 @@ int xe_bo_evict_pinned(struct xe_bo *bo)
>>  
>>  out_backup:
>>          xe_bo_vunmap(backup);
>> -        xe_bo_unlock(backup);
>>          if (ret)
>>                  xe_bo_put(backup);
>>  out_unlock_bo:
>> @@ -1212,17 +1212,12 @@ int xe_bo_restore_pinned(struct xe_bo *bo)
>>          if (!backup)
>>                  return 0;
>>  
>> -        xe_bo_lock(backup, false);
>> +        xe_bo_lock(bo, false);
>>  
>>          ret = ttm_bo_validate(&backup->ttm, &backup->placement, &ctx);
>>          if (ret)
>>                  goto out_backup;
>>  
>> -        if (WARN_ON(!dma_resv_trylock(bo->ttm.base.resv))) {
>> -                ret = -EBUSY;
>> -                goto out_backup;
>> -        }
>> -
>>          if (xe_bo_is_user(bo) || (bo->flags & XE_BO_FLAG_PINNED_LATE_RESTORE)) {
>>                  struct xe_migrate *migrate;
>>                  struct dma_fence *fence;
>> @@ -1271,15 +1266,14 @@ int xe_bo_restore_pinned(struct xe_bo *bo)
>>  
>>          bo->backup_obj = NULL;
>>  
>> +out_backup:
>> +        xe_bo_vunmap(backup);
>> +        if (!bo->backup_obj)
>> +                xe_bo_put(backup);
>>  out_unlock_bo:
>>          if (unmap)
>>                  xe_bo_vunmap(bo);
>>          xe_bo_unlock(bo);
>> -out_backup:
>> -        xe_bo_vunmap(backup);
>> -        xe_bo_unlock(backup);
>> -        if (!bo->backup_obj)
>> -                xe_bo_put(backup);
>>          return ret;
>>  }
>>
>