Intel-XE Archive on lore.kernel.org
From: Nirmoy Das <nirmoy.das@intel.com>
To: John Harrison <john.c.harrison@intel.com>,
	<intel-xe@lists.freedesktop.org>
Cc: Badal Nilawar <badal.nilawar@intel.com>,
	Matthew Auld <matthew.auld@intel.com>,
	Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>,
	Lucas De Marchi <lucas.demarchi@intel.com>,
	<stable@vger.kernel.org>, Matthew Brost <matthew.brost@intel.com>
Subject: Re: [PATCH v3] drm/xe/ufence: Flush xe ordered_wq in case of ufence timeout
Date: Mon, 28 Oct 2024 18:44:27 +0100	[thread overview]
Message-ID: <ce7ba88b-0624-40ab-a4e4-c2c5f19b2095@intel.com> (raw)
In-Reply-To: <d9befc74-b9b5-48c6-851f-8163b356edc7@intel.com>


On 10/28/2024 6:03 PM, John Harrison wrote:
> On 10/28/2024 04:49, Nirmoy Das wrote:
>> Flush xe ordered_wq in case of a ufence timeout, which is observed
>> on LNL and points to a recent scheduling issue with E-cores.
>>
>> This is similar to the recent fix:
>> commit e51527233804 ("drm/xe/guc/ct: Flush g2h worker in case of g2h
>> response timeout") and should be removed once there is a E-core
>> scheduling fix for LNL.
>>
>> v2: Add platform check (Himal)
>>      s/__flush_workqueue/flush_workqueue/ (Jani)
>> v3: Remove gfx platform check as the issue is related to the CPU
>>      platform (John)
>>
>> Cc: Badal Nilawar <badal.nilawar@intel.com>
>> Cc: Matthew Auld <matthew.auld@intel.com>
>> Cc: John Harrison <John.C.Harrison@Intel.com>
>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> Cc: Lucas De Marchi <lucas.demarchi@intel.com>
>> Cc: <stable@vger.kernel.org> # v6.11+
>> Link: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2754
>> Suggested-by: Matthew Brost <matthew.brost@intel.com>
>> Signed-off-by: Nirmoy Das <nirmoy.das@intel.com>
>> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
>> ---
>>   drivers/gpu/drm/xe/xe_wait_user_fence.c | 11 +++++++++++
>>   1 file changed, 11 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_wait_user_fence.c b/drivers/gpu/drm/xe/xe_wait_user_fence.c
>> index f5deb81eba01..886c9862d89c 100644
>> --- a/drivers/gpu/drm/xe/xe_wait_user_fence.c
>> +++ b/drivers/gpu/drm/xe/xe_wait_user_fence.c
>> @@ -155,6 +155,17 @@ int xe_wait_user_fence_ioctl(struct drm_device *dev, void *data,
>>           }
>>             if (!timeout) {
>> +            /*
>> +             * This is analogous to e51527233804 ("drm/xe/guc/ct: Flush g2h worker
>> +             * in case of g2h response timeout")
>> +             *
>> +             * TODO: Drop this change once workqueue scheduling delay issue is
>> +             * fixed on LNL Hybrid CPU.
>> +             */
>> +            flush_workqueue(xe->ordered_wq);
> I thought the plan was to make this a trackable macro used by all instances of this w/a - LNL_FLUSH_WORK|WORKQUEUE() - with a single, complete description of the problem attached to the macro rather than 'this is similar to' comments scattered through the code.
>
> There was also a request to add a dmesg print if the failing condition was met after doing the flush.
>

I will resend. I misunderstood the last conversation.

> John.
>
>> +            err = do_compare(addr, args->value, args->mask, args->op);
>> +            if (err <= 0)
>> +                break;
>>               err = -ETIME;
>>               break;
>>           }
>
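
For context, a minimal sketch of the direction discussed above: one trackable
macro along the lines John suggests (LNL_FLUSH_WORKQUEUE()), carrying a single,
complete description of the problem, used at the timeout path together with a
debug print when the ufence only signals after the flush. The macro body, the
drm_dbg() message and the exact placement are illustrative assumptions, not
the merged implementation:

	/*
	 * Sketch only: one documented macro for the LNL hybrid-CPU workqueue
	 * scheduling delay workaround, so every flush site is greppable and
	 * the problem is described in a single place. Name and body are
	 * assumptions based on the suggestion above.
	 *
	 * TODO: drop all users once the CPU-side scheduling issue is fixed.
	 */
	#define LNL_FLUSH_WORKQUEUE(wq__)	flush_workqueue(wq__)

	/* In xe_wait_user_fence_ioctl(), on wait timeout: */
	if (!timeout) {
		LNL_FLUSH_WORKQUEUE(xe->ordered_wq);

		err = do_compare(addr, args->value, args->mask, args->op);
		if (err <= 0) {
			/* Assumed wording: make it visible when the w/a fired */
			drm_dbg(&xe->drm,
				"LNL w/a: ufence signalled only after flushing ordered_wq\n");
			break;
		}
		err = -ETIME;
		break;
	}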

Thread overview: 20+ messages
2024-10-28 11:49 [PATCH v3] drm/xe/ufence: Flush xe ordered_wq in case of ufence timeout Nirmoy Das
2024-10-28 14:13 ` ✓ CI.Patch_applied: success for drm/xe/ufence: Flush xe ordered_wq in case of ufence timeout (rev3) Patchwork
2024-10-28 14:14 ` ✗ CI.checkpatch: warning " Patchwork
2024-10-28 14:15 ` ✓ CI.KUnit: success " Patchwork
2024-10-28 14:26 ` ✓ CI.Build: " Patchwork
2024-10-28 14:29 ` ✓ CI.Hooks: " Patchwork
2024-10-28 14:30 ` ✓ CI.checksparse: " Patchwork
2024-10-28 14:55 ` ✗ CI.BAT: failure " Patchwork
2024-10-28 15:44 ` ✗ CI.FULL: " Patchwork
2024-10-28 16:14 ` ✓ CI.Patch_applied: success for drm/xe/ufence: Flush xe ordered_wq in case of ufence timeout (rev4) Patchwork
2024-10-28 16:14 ` ✗ CI.checkpatch: warning " Patchwork
2024-10-28 16:15 ` ✓ CI.KUnit: success " Patchwork
2024-10-28 16:27 ` ✓ CI.Build: " Patchwork
2024-10-28 16:29 ` ✓ CI.Hooks: " Patchwork
2024-10-28 16:31 ` ✓ CI.checksparse: " Patchwork
2024-10-28 17:00 ` ✓ CI.BAT: " Patchwork
2024-10-28 17:03 ` [PATCH v3] drm/xe/ufence: Flush xe ordered_wq in case of ufence timeout John Harrison
2024-10-28 17:44   ` Nirmoy Das [this message]
2024-10-28 17:58 ` ✗ CI.FULL: failure for drm/xe/ufence: Flush xe ordered_wq in case of ufence timeout (rev4) Patchwork
2024-10-29 11:56 ` Patchwork
