From: "Christian König" <christian.koenig@amd.com>
To: Tvrtko Ursulin <tursulin@ursulin.net>,
phasta@mailbox.org, alexdeucher@gmail.com,
simona.vetter@ffwll.ch, airlied@gmail.com,
felix.kuehling@amd.com, matthew.brost@intel.com
Cc: dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 09/20] drm/sched: use inline locks for the drm-sched-fence
Date: Thu, 6 Nov 2025 14:23:41 +0100 [thread overview]
Message-ID: <1ddca1b3-e0d2-41bd-8708-10dd12f7e656@amd.com> (raw)
In-Reply-To: <21cbf337-45be-4418-b9dc-d3e2034b4962@ursulin.net>
On 11/4/25 16:12, Tvrtko Ursulin wrote:
>
> On 31/10/2025 13:16, Christian König wrote:
>> Just as proof of concept and minor cleanup.
>>
>> Signed-off-by: Christian König <christian.koenig@amd.com>
>> ---
>>  drivers/gpu/drm/scheduler/sched_fence.c | 11 +++++------
>>  include/drm/gpu_scheduler.h             |  4 ----
>>  2 files changed, 5 insertions(+), 10 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
>> index 9391d6f0dc01..7a94e03341cb 100644
>> --- a/drivers/gpu/drm/scheduler/sched_fence.c
>> +++ b/drivers/gpu/drm/scheduler/sched_fence.c
>> @@ -156,19 +156,19 @@ static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
>>  	struct dma_fence *parent;
>>  	unsigned long flags;
>>  
>> -	spin_lock_irqsave(&fence->lock, flags);
>> +	dma_fence_lock(f, flags);
>
> Moving to dma_fence_lock should either be a separate patch or squashed into the one which converts many other drivers. Even a separate patch before that previous patch would be better.
As far as I can see that won't work, or would at least be rather tricky.

Previously spin_lock_irqsave() locked drm_sched_fence->lock, but now we are locking dma_fence->lock. That only works because we switched to using the internal lock.
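To make the ordering argument concrete, here is a hypothetical userspace model (pthread mutexes standing in for kernel spinlocks; none of the names below are the real dma-buf API): once drm_sched_fence drops its own spinlock_t, a lock helper that only receives the dma_fence has nothing left to lock except the fence's embedded lock, so the helper conversion depends on the inline-lock switch having happened first.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/*
 * Hypothetical userspace model of the locking change, with pthread
 * mutexes standing in for kernel spinlocks.  This illustrates the
 * dependency argument above; it is not the real dma-buf API.
 */
struct dma_fence {
	pthread_mutex_t *lock;       /* what the fence actually locks */
	pthread_mutex_t inline_lock; /* embedded lock from the earlier patches */
};

/* Passing NULL for the external lock selects the embedded one, mirroring
 * the dma_fence_init(..., NULL, ...) calls in the quoted hunk. */
static void model_fence_init(struct dma_fence *f, pthread_mutex_t *external)
{
	pthread_mutex_init(&f->inline_lock, NULL);
	f->lock = external ? external : &f->inline_lock;
}

/* A dma_fence_lock()-style helper only sees the fence itself, so it can
 * only ever take whatever lock the fence carries. */
static void model_fence_lock(struct dma_fence *f)
{
	pthread_mutex_lock(f->lock);
}

static void model_fence_unlock(struct dma_fence *f)
{
	pthread_mutex_unlock(f->lock);
}
```

With the external drm_sched_fence->lock removed, model_fence_init(f, NULL) is the only option left, which is why the helper conversion cannot be peeled out in front of the inline-lock patch.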
> Naming wise, however, I still think dma_fence_lock_irqsave would probably be better, to stick with the same pattern everyone is so used to.
Oh, that is a good idea. Going to apply this to the patch set.
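For illustration, the renamed helpers could keep the exact call-site shape of spin_lock_irqsave(). Sketched here as a userspace toy (pthread mutex instead of a spinlock, the flags word merely carried along; nothing here is the real kernel API):

```c
#include <assert.h>
#include <pthread.h>

/* Toy model of the suggested naming: the helpers thread a flags word
 * through exactly like spin_lock_irqsave()/spin_unlock_irqrestore(),
 * even though this userspace stand-in has no interrupt state to save. */
struct dma_fence {
	pthread_mutex_t lock;
	unsigned long flags;
};

#define dma_fence_lock_irqsave(f, irqflags) \
	do { (irqflags) = 0; pthread_mutex_lock(&(f)->lock); } while (0)

#define dma_fence_unlock_irqrestore(f, irqflags) \
	do { (void)(irqflags); pthread_mutex_unlock(&(f)->lock); } while (0)

/* Call site shaped like the quoted deadline hunk; bit 0 stands in for
 * DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT. */
static void model_set_deadline_bit(struct dma_fence *f)
{
	unsigned long irqflags;

	dma_fence_lock_irqsave(f, irqflags);
	f->flags |= 1UL;
	dma_fence_unlock_irqrestore(f, irqflags);
}
```

Keeping the irqsave/irqrestore suffixes means converted call sites read the same as before, which makes the mechanical conversion easy to review.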
Regards,
Christian.
>
> Regards,
>
> Tvrtko
>
>>  	/* If we already have an earlier deadline, keep it: */
>>  	if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
>>  	    ktime_before(fence->deadline, deadline)) {
>> -		spin_unlock_irqrestore(&fence->lock, flags);
>> +		dma_fence_unlock(f, flags);
>>  		return;
>>  	}
>>  
>>  	fence->deadline = deadline;
>>  	set_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags);
>> -	spin_unlock_irqrestore(&fence->lock, flags);
>> +	dma_fence_unlock(f, flags);
>>  
>>  	/*
>>  	 * smp_load_acquire() to ensure that if we are racing another
>> @@ -217,7 +217,6 @@ struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity,
>>  	fence->owner = owner;
>>  	fence->drm_client_id = drm_client_id;
>> -	spin_lock_init(&fence->lock);
>>  
>>  	return fence;
>>  }
>> @@ -230,9 +229,9 @@ void drm_sched_fence_init(struct drm_sched_fence *fence,
>>  	fence->sched = entity->rq->sched;
>>  	seq = atomic_inc_return(&entity->fence_seq);
>>  	dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
>> -		       &fence->lock, entity->fence_context, seq);
>> +		       NULL, entity->fence_context, seq);
>>  	dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
>> -		       &fence->lock, entity->fence_context + 1, seq);
>> +		       NULL, entity->fence_context + 1, seq);
>>  }
>>  
>>  module_init(drm_sched_fence_slab_init);
>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>> index fb88301b3c45..b77f24a783e3 100644
>> --- a/include/drm/gpu_scheduler.h
>> +++ b/include/drm/gpu_scheduler.h
>> @@ -297,10 +297,6 @@ struct drm_sched_fence {
>>  	 * belongs to.
>>  	 */
>>  	struct drm_gpu_scheduler *sched;
>> -	/**
>> -	 * @lock: the lock used by the scheduled and the finished fences.
>> -	 */
>> -	spinlock_t lock;
>>  	/**
>>  	 * @owner: job owner for debugging
>>  	 */
>