From: "Christian König" <christian.koenig@amd.com>
To: Maarten Lankhorst <maarten.lankhorst@canonical.com>, airlied@linux.ie
Cc: thellstrom@vmware.com, nouveau@lists.freedesktop.org,
linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
bskeggs@redhat.com, alexander.deucher@amd.com
Subject: Re: [PATCH 09/19] drm/radeon: handle lockup in delayed work, v2
Date: Mon, 4 Aug 2014 16:37:56 +0200
Message-ID: <53DF9AC4.3010700@amd.com>
In-Reply-To: <53DF8BF2.4000309@canonical.com>
> It's a pain to deal with gpu reset.
Yeah, well that's nothing new.
> I've now tried other solutions, but they would mean reverting to the old style during gpu lockup recovery and only running the delayed work when !lockup.
> But that would make the timeout useless to add. I think the cleanest option is keeping the v2 patch, because potentially any waiting code can be called during lockup recovery.
The lockup code itself should never call any waiting code, and v2 doesn't
seem to handle a couple of cases correctly either.
How about moving the fence waiting out of the reset code?
Christian.
Am 04.08.2014 um 15:34 schrieb Maarten Lankhorst:
> Hey,
>
> op 04-08-14 13:57, Christian König schreef:
>> Am 04.08.2014 um 10:55 schrieb Maarten Lankhorst:
>>> op 04-08-14 10:36, Christian König schreef:
>>>> Hi Maarten,
>>>>
>>>> Sorry for the delay. I've got way too much to do recently.
>>>>
>>>> Am 01.08.2014 um 19:46 schrieb Maarten Lankhorst:
>>>>> On 01-08-14 18:35, Christian König wrote:
>>>>>> Am 31.07.2014 um 17:33 schrieb Maarten Lankhorst:
>>>>>>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
>>>>>>> ---
>>>>>>> V1 had a nasty bug breaking gpu lockup recovery. The fix is not
>>>>>>> allowing radeon_fence_driver_check_lockup to take exclusive_lock,
>>>>>>> and kill it during lockup recovery instead.
>>>>>> That looks like the delayed work starts running as soon as we submit a fence, and not when it's needed for waiting.
>>>>>>
>>>>>> Since it's a backup for failing IRQs I would rather put it into radeon_irq_kms.c and start/stop it when the IRQs are started/stopped.
>>>>> The delayed work is not just for failing IRQs; it's also the handler used to detect lockups, which is why I trigger it after processing fences and reset the timer after processing.
>>>> The idea was to turn the delayed work on and off when we turn the IRQ on and off; processing of the delayed work handler can still happen in radeon_fence.c.
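
For illustration, a minimal sketch of the idea above (the names are placeholders rather than the real radeon structures, and work item initialization is omitted):

    #include <linux/workqueue.h>
    #include <linux/jiffies.h>

    /* Stand-in for the real radeon_device; only the work item matters here. */
    struct lockup_dev {
            struct delayed_work lockup_work;
    };

    /* Called when the fence IRQ for a ring is enabled. */
    static void fence_irq_get(struct lockup_dev *dev)
    {
            /* ...program the hardware to raise fence interrupts... */
            schedule_delayed_work(&dev->lockup_work, HZ / 2);
    }

    /* Called when the fence IRQ is disabled again. */
    static void fence_irq_put(struct lockup_dev *dev)
    {
            cancel_delayed_work_sync(&dev->lockup_work);
            /* ...mask the fence interrupt in hardware... */
    }
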
>>>>
>>>>> Specifically what happened was this scenario:
>>>>>
>>>>> - lock up occurs
>>>>> - write lock taken by gpu_reset
>>>>> - delayed work runs, tries to acquire read lock, blocks
>>>>> - gpu_reset tries to cancel delayed work synchronously
>>>>> - has to wait for delayed work to finish -> deadlock
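
For illustration, a minimal sketch of that interleaving; the lock and the work item below stand in for rdev->exclusive_lock and the fence check work, and setup code is omitted:

    #include <linux/rwsem.h>
    #include <linux/workqueue.h>

    static DECLARE_RWSEM(exclusive_lock);  /* stands in for rdev->exclusive_lock */
    static struct delayed_work check_work;

    static void check_lockup(struct work_struct *work)
    {
            down_read(&exclusive_lock);    /* blocks while gpu_reset() holds the write side */
            /* ...process fences, check for a lockup... */
            up_read(&exclusive_lock);
    }

    static void gpu_reset(void)
    {
            down_write(&exclusive_lock);
            /*
             * Deadlock: the work item above is stuck in down_read(), so it
             * can never complete and this synchronous cancel never returns.
             */
            cancel_delayed_work_sync(&check_work);
            /* ...actual reset... */
            up_write(&exclusive_lock);
    }
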
>>>> Why do you want to disable the work item from the lockup handler in the first place?
>>>>
>>>> Just take the exclusive lock in the work item; when it runs concurrently with the lockup handler it will simply block until the lockup handler completes.
>>> With the delayed work radeon_fence_wait no longer handles unreliable interrupts itself, so it has to run from the lockup handler.
>>> But an alternative solution could be adding a radeon_fence_wait_timeout, ignoring the timeout, and checking whether the fence is signaled when it expires.
>>> This would probably be a cleaner solution.
>> Yeah, agree. Manually specifying a timeout in the fence wait during lockup handling sounds like the best alternative to me.
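
For illustration, a rough sketch of such a wait; the fence type and signaled check below are placeholders, not the actual radeon interface:

    #include <linux/wait.h>
    #include <linux/jiffies.h>
    #include <linux/types.h>

    struct example_fence;                                   /* placeholder fence type */
    bool example_fence_signaled(struct example_fence *f);   /* placeholder check */

    static DECLARE_WAIT_QUEUE_HEAD(fence_queue);

    static int fence_wait_ignoring_timeout(struct example_fence *fence)
    {
            long r;

            for (;;) {
                    r = wait_event_timeout(fence_queue,
                                           example_fence_signaled(fence), HZ / 2);
                    if (r > 0)
                            return 0;  /* signaled before the timeout */
                    /*
                     * Timed out: don't treat this as an error. Re-check the
                     * fence by hand in case an interrupt was lost, then wait
                     * again.
                     */
                    if (example_fence_signaled(fence))
                            return 0;
            }
    }
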
> It's a pain to deal with gpu reset.
>
> I've now tried other solutions, but they would mean reverting to the old style during gpu lockup recovery and only running the delayed work when !lockup.
> But that would make the timeout useless to add. I think the cleanest option is keeping the v2 patch, because potentially any waiting code can be called during lockup recovery.
>
> ~Maarten
>
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel
2014-07-31 15:32 [PATCH 01/19] fence: add debugging lines to fence_is_signaled for the callback Maarten Lankhorst
2014-07-31 15:32 ` [PATCH 02/19] drm/ttm: add interruptible parameter to ttm_eu_reserve_buffers Maarten Lankhorst
2014-07-31 15:33 ` [PATCH 03/19] drm/ttm: kill off some members to ttm_validate_buffer Maarten Lankhorst
2014-07-31 15:33 ` [PATCH 04/19] drm/nouveau: add reservation to nouveau_gem_ioctl_cpu_prep Maarten Lankhorst
2014-07-31 15:33 ` [PATCH 05/19] drm/nouveau: require reservations for nouveau_fence_sync and nouveau_bo_fence Maarten Lankhorst
2014-07-31 15:33 ` [PATCH 06/19] drm/ttm: call ttm_bo_wait while inside a reservation Maarten Lankhorst
2014-07-31 15:33 ` [PATCH 07/19] drm/ttm: kill fence_lock Maarten Lankhorst
2014-07-31 15:33 ` [PATCH 08/19] drm/nouveau: rework to new fence interface Maarten Lankhorst
2014-07-31 15:33 ` [PATCH 09/19] drm/radeon: handle lockup in delayed work, v2 Maarten Lankhorst
2014-08-01 16:35 ` Christian König
2014-08-01 17:46 ` Maarten Lankhorst
2014-08-04 8:36 ` Christian König
[not found] ` <53DF462B.2060102-5C7GfCeVMHo@public.gmane.org>
2014-08-04 8:55 ` Maarten Lankhorst
2014-08-04 11:57 ` Christian König
2014-08-04 13:34 ` Maarten Lankhorst
2014-08-04 14:37 ` Christian König [this message]
2014-08-04 14:40 ` Maarten Lankhorst
2014-08-04 14:45 ` Christian König
2014-08-04 14:58 ` Maarten Lankhorst
2014-08-04 15:04 ` Christian König
2014-08-04 15:09 ` Maarten Lankhorst
2014-08-04 17:04 ` Christian König
[not found] ` <53DFBD2E.5070001-5C7GfCeVMHo@public.gmane.org>
2014-08-05 8:16 ` Daniel Vetter
2014-08-05 9:34 ` Maarten Lankhorst
2014-07-31 15:33 ` [PATCH 10/19] drm/radeon: add timeout argument to radeon_fence_wait_seq Maarten Lankhorst
2014-07-31 15:33 ` [PATCH 11/19] drm/radeon: use common fence implementation for fences, v2 Maarten Lankhorst
2014-07-31 15:34 ` [PATCH 12/19] drm/qxl: rework to new fence interface Maarten Lankhorst
2014-07-31 15:34 ` [PATCH 13/19] drm/vmwgfx: get rid of different types of fence_flags entirely Maarten Lankhorst
2014-07-31 15:34 ` [PATCH 14/19] drm/vmwgfx: rework to new fence interface Maarten Lankhorst
2014-07-31 15:34 ` [PATCH 15/19] drm/ttm: flip the switch, and convert to dma_fence Maarten Lankhorst
2014-07-31 15:34 ` [PATCH 16/19] drm/nouveau: use rcu in nouveau_gem_ioctl_cpu_prep Maarten Lankhorst
2014-07-31 15:34 ` [PATCH 17/19] drm/radeon: use rcu waits in some ioctls Maarten Lankhorst
2014-08-01 8:27 ` Michel Dänzer
2014-08-01 10:12 ` Maarten Lankhorst
2014-08-01 14:13 ` Michel Dänzer
2014-08-01 17:07 ` Maarten Lankhorst
2014-08-04 8:42 ` Michel Dänzer
2014-08-04 8:56 ` Maarten Lankhorst
2014-08-04 9:25 ` Michel Dänzer
2014-08-04 9:30 ` Maarten Lankhorst
2014-07-31 15:34 ` [PATCH 18/19] drm/vmwgfx: use rcu in vmw_user_dmabuf_synccpu_grab Maarten Lankhorst
2014-07-31 15:34 ` [PATCH 19/19] drm/ttm: use rcu in core ttm Maarten Lankhorst