From: Nirmoy Das <nirmoy.das@intel.com>
To: Matthew Auld <matthew.auld@intel.com>, <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH v3] drm/xe/ufence: Signal ufence immediately when possible
Date: Fri, 18 Oct 2024 17:29:09 +0200 [thread overview]
Message-ID: <cdddde09-428b-45b5-a058-3a7a8b80bf77@intel.com> (raw)
In-Reply-To: <9fb1e3a2-c4ab-4520-996f-b5d20f634093@intel.com>
On 10/18/2024 4:53 PM, Matthew Auld wrote:
> On 18/10/2024 15:40, Nirmoy Das wrote:
>>
>> On 10/18/2024 4:23 PM, Matthew Auld wrote:
>>> On 18/10/2024 13:47, Nirmoy Das wrote:
>>>> If the backing fence is already signaled, then signal the ufence immediately.
>>>> This should reduce load on the xe ordered_wq and also avoid blocking
>>>> the signaling of a ufence which doesn't require any serialization.
>>>>
>>>> v2: fix system_wq typo
>>>> v3: signal immediately instead of queuing in system_wq (Matt B)
>>>>
>>>> Link: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/1630
>>>> Cc: Matthew Auld <matthew.auld@intel.com>
>>>> gc: Matthew Brost <matthew.brost@intel.com>
>>>
>>> s/gc/Cc
>>>
>>>> Signed-off-by: Nirmoy Das <nirmoy.das@intel.com>
>>>> ---
>>>> drivers/gpu/drm/xe/xe_sync.c | 15 +++++++++++----
>>>> 1 file changed, 11 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/xe/xe_sync.c b/drivers/gpu/drm/xe/xe_sync.c
>>>> index c6cf227ead40..069c1e4ebea5 100644
>>>> --- a/drivers/gpu/drm/xe/xe_sync.c
>>>> +++ b/drivers/gpu/drm/xe/xe_sync.c
>>>> @@ -72,10 +72,8 @@ static struct xe_user_fence *user_fence_create(struct xe_device *xe, u64 addr,
>>>> return ufence;
>>>> }
>>>> -static void user_fence_worker(struct work_struct *w)
>>>> +static void signal_user_fence(struct xe_user_fence *ufence)
>>>> {
>>>> - struct xe_user_fence *ufence = container_of(w, struct xe_user_fence, worker);
>>>> -
>>>> if (mmget_not_zero(ufence->mm)) {
>>>> kthread_use_mm(ufence->mm);
>>>> if (copy_to_user(ufence->addr, &ufence->value, sizeof(ufence->value)))
>>>
>>> This can end up in a CPU fault handler? There might be some locking issues if the caller is, say, holding dma-resv. For example, the caller in xe_exec is holding dma-resv. If we can indeed hit this path, then we might get some splats/deadlocks, I think.
>>
>>
>> What is the connection between writing into the ufence addr and dma-resv? Trying to understand this locking problem.
>
> Basically the user can have the ufence be an mmap address from a BO, so it can hit xe_gem_fault() here. The mmap lock should already be tainted with dma-resv, so might_fault() should complain.
I see what you mean; I hadn't thought about that.
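Spelling it out, a minimal sketch of the inversion as I understand it (made-up helper, not the xe code):

#include <linux/dma-resv.h>
#include <linux/uaccess.h>

/* Hypothetical helper: what signaling the ufence inline amounts to. */
static int signal_inline(struct dma_resv *resv, u64 __user *addr, u64 value)
{
	int ret;

	ret = dma_resv_lock(resv, NULL);	/* reservation_ww_class held */
	if (ret)
		return ret;

	/*
	 * copy_to_user() -> might_fault(): a fault on addr takes mmap_lock
	 * *under* dma-resv, the reverse of the mmap_lock -> dma-resv order
	 * that dma_resv_lockdep() primes at boot, hence the splat below.
	 */
	ret = copy_to_user(addr, &value, sizeof(value)) ? -EFAULT : 0;

	dma_resv_unlock(resv);
	return ret;
}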
>
>>
>>
>> It looks like I have to use a worker anyway to do kthread_use_mm(): https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-140169v1/bat-atsm-2/igt@xe_exec_balancer@no-exec-cm-virtual-basic.html
>
> Yes, exactly that, just with might_fault() in copy_to_user(). Good to see that CI caught this. From the logs we can also see the exact dma-resv splat as per above:
I'll go back to the previous rev, but with the bool as suggested by Matt B.
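Roughly this shape, i.e. always defer to the worker so copy_to_user() runs without any caller locks held (untested sketch; the signalled flag is a placeholder for the bool):

static void user_fence_worker(struct work_struct *w)
{
	struct xe_user_fence *ufence =
		container_of(w, struct xe_user_fence, worker);

	if (mmget_not_zero(ufence->mm)) {
		/* Worker context: no caller locks held, so faulting is fine. */
		kthread_use_mm(ufence->mm);
		if (copy_to_user(ufence->addr, &ufence->value,
				 sizeof(ufence->value)))
			XE_WARN_ON("copy_to_user failed");
		kthread_unuse_mm(ufence->mm);
		mmput(ufence->mm);
	}

	WRITE_ONCE(ufence->signalled, true);	/* placeholder for the bool */
	user_fence_put(ufence);
}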
Thanks,
Nirmoy
>
> <4> [233.110447] xe_exec_balance/3613 is trying to acquire lock:
> <4> [233.110457] ff11000100085998 (&mm->mmap_lock){++++}-{3:3}, at: __might_fault+0x43/0x90
> <4> [233.110481]
> but task is already holding lock:
> <4> [233.110491] ff110001231a1da0 (reservation_ww_class_mutex){+.+.}-{3:3}, at: drm_exec_lock_obj+0x88/0x2b0 [drm_exec]
> <4> [233.110517]
> which lock already depends on the new lock.
> <4> [233.110530]
> the existing dependency chain (in reverse order) is:
> <4> [233.110540]
> -> #2 (reservation_ww_class_mutex){+.+.}-{3:3}:
> <4> [233.110558] __ww_mutex_lock.constprop.0+0xe1/0x14d0
> <4> [233.110574] ww_mutex_lock+0x3c/0xa0
> <4> [233.110586] dma_resv_lockdep+0x1a4/0x340
> <4> [233.110599] do_one_initcall+0x76/0x3e0
> <4> [233.110615] kernel_init_freeable+0x3dc/0x690
> <4> [233.110632] kernel_init+0x1b/0x200
> <4> [233.110645] ret_from_fork+0x3a/0x60
> <4> [233.110658] ret_from_fork_asm+0x1a/0x30
> <4> [233.110671]
> -> #1 (reservation_ww_class_acquire){+.+.}-{0:0}:
> <4> [233.110689] dma_resv_lockdep+0x180/0x340
> <4> [233.110699] do_one_initcall+0x76/0x3e0
> <4> [233.110713] kernel_init_freeable+0x3dc/0x690
> <4> [233.110728] kernel_init+0x1b/0x200
> <4> [233.110740] ret_from_fork+0x3a/0x60
> <4> [233.110752] ret_from_fork_asm+0x1a/0x30
> <4> [233.110764]
> -> #0 (&mm->mmap_lock){++++}-{3:3}:
> <4> [233.110780] __lock_acquire+0x1623/0x2800
> <4> [233.110794] lock_acquire+0xc5/0x2f0
> <4> [233.110807] __might_fault+0x63/0x90
> <4> [233.110818] _copy_to_user+0x23/0x70
> <4> [233.110830] signal_user_fence+0x46/0xd0 [xe]
> <4> [233.111108] xe_sync_entry_signal+0x14e/0x1b0 [xe]
> <4> [233.111366] vm_bind_ioctl_ops_execute+0x3f8/0x910 [xe]
> <4> [233.111665] xe_vm_bind_ioctl+0x1623/0x22a0 [xe]
> <4> [233.111951] drm_ioctl_kernel+0xb1/0x120 [drm]
> <4> [233.112052] drm_ioctl+0x2e8/0x5a0 [drm]
> <4> [233.112140] xe_drm_ioctl+0x53/0x80 [xe]
> <4> [233.112331] __x64_sys_ioctl+0x95/0xd0
> <4> [233.112342] x64_sys_call+0x1089/0x2060
> <4> [233.112355] do_syscall_64+0x87/0x140
> <4> [233.112365] entry_SYSCALL_64_after_hwframe+0x76/0x7e
> <4> [233.112380]
>
>>
>>
>> Regards,
>>
>> Nirmoy
>>
>>>
>>>> @@ -89,6 +87,14 @@ static void user_fence_worker(struct work_struct *w)
>>>> user_fence_put(ufence);
>>>> }
>>>> +static void user_fence_worker(struct work_struct *w)
>>>> +{
>>>> + struct xe_user_fence *ufence = container_of(w, struct xe_user_fence,
>>>> + worker);
>>>> +
>>>> + signal_user_fence(ufence);
>>>> +}
>>>> +
>>>> static void kick_ufence(struct xe_user_fence *ufence, struct dma_fence *fence)
>>>> {
>>>> INIT_WORK(&ufence->worker, user_fence_worker);
>>>> @@ -236,7 +242,8 @@ void xe_sync_entry_signal(struct xe_sync_entry *sync, struct dma_fence *fence)
>>>> err = dma_fence_add_callback(fence, &sync->ufence->cb,
>>>> user_fence_cb);
>>>> if (err == -ENOENT) {
>>>> - kick_ufence(sync->ufence, fence);
>>>> + /* signal the ufence immediately if fence is already signalled */
>>>> + signal_user_fence(sync->ufence);
>>>> } else if (err) {
>>>> XE_WARN_ON("failed to add user fence");
>>>> user_fence_put(sync->ufence);
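For reference, the dma_fence_add_callback() contract this hinges on, as a generic sketch (hypothetical names, not the xe code): it returns 0 and arms the callback, or -ENOENT if the fence has already signaled, in which case the caller handles it directly.

#include <linux/dma-fence.h>

static void my_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
{
	/* Runs from the fence's signaling context once it signals. */
}

static int arm_or_run_now(struct dma_fence *fence, struct dma_fence_cb *cb)
{
	int err = dma_fence_add_callback(fence, cb, my_cb);

	if (err == -ENOENT) {
		/* Fence already signaled: no callback will fire, act now. */
		my_cb(fence, cb);
		return 0;
	}

	return err;	/* 0: callback armed; anything else is an error */
}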
Thread overview: 15+ messages
2024-10-18 12:47 [PATCH v3] drm/xe/ufence: Signal ufence immediately when possible Nirmoy Das
2024-10-18 13:34 ` ✓ CI.Patch_applied: success for " Patchwork
2024-10-18 13:34 ` ✓ CI.checkpatch: " Patchwork
2024-10-18 13:35 ` ✓ CI.KUnit: " Patchwork
2024-10-18 13:47 ` ✓ CI.Build: " Patchwork
2024-10-18 13:49 ` ✓ CI.Hooks: " Patchwork
2024-10-18 13:51 ` ✓ CI.checksparse: " Patchwork
2024-10-18 14:16 ` ✗ CI.BAT: failure " Patchwork
2024-10-18 14:23 ` [PATCH v3] " Matthew Auld
2024-10-18 14:40 ` Nirmoy Das
2024-10-18 14:53 ` Matthew Auld
2024-10-18 15:29 ` Nirmoy Das [this message]
2024-10-18 16:16 ` Matthew Brost
2024-10-19 8:30 ` Nirmoy Das
2024-10-19 4:55 ` ✗ CI.FULL: failure for " Patchwork