From: Maarten Lankhorst <maarten.lankhorst@canonical.com>
To: "Christian König" <deathsimple@vodafone.de>, airlied@linux.ie
Cc: nouveau@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org
Subject: Re: [RFC PATCH v1 08/16] drm/radeon: use common fence implementation for fences
Date: Thu, 15 May 2014 17:58:08 +0200	[thread overview]
Message-ID: <5374E410.1080203@canonical.com> (raw)
In-Reply-To: <5374E1B5.2020408@vodafone.de>

On 15-05-14 17:48, Christian König wrote:
> On 15.05.2014 16:18, Maarten Lankhorst wrote:
>> On 15-05-14 15:19, Christian König wrote:
>>> On 15.05.2014 15:04, Maarten Lankhorst wrote:
>>>> On 15-05-14 11:42, Christian König wrote:
>>>>> On 15.05.2014 11:38, Maarten Lankhorst wrote:
>>>>>> On 15-05-14 11:21, Christian König wrote:
>>>>>>> On 15.05.2014 03:06, Maarten Lankhorst wrote:
>>>>>>>> On 14-05-14 17:29, Christian König wrote:
>>>>>>>>>> +    /* did fence get signaled after we enabled the sw irq? */
>>>>>>>>>> +    if (atomic64_read(&fence->rdev->fence_drv[fence->ring].last_seq) >= fence->seq) {
>>>>>>>>>> +        radeon_irq_kms_sw_irq_put(fence->rdev, fence->ring);
>>>>>>>>>> +        return false;
>>>>>>>>>> +    }
>>>>>>>>>> +
>>>>>>>>>> +    fence->fence_wake.flags = 0;
>>>>>>>>>> +    fence->fence_wake.private = NULL;
>>>>>>>>>> +    fence->fence_wake.func = radeon_fence_check_signaled;
>>>>>>>>>> +    __add_wait_queue(&fence->rdev->fence_queue, &fence->fence_wake);
>>>>>>>>>> +    fence_get(f);
>>>>>>>>> That looks like a race condition to me. The fence needs to be added to the wait queue before the check, not after.
>>>>>>>>>
>>>>>>>>> Apart from that, the whole approach looks like a really bad idea to me. How, for example, is lockup detection supposed to happen with this?
>>>>>>>> It's not a race condition because fence_queue.lock is held when this function is called.
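>>>>>>>>
>>>>>>>> For reference, the common fence code takes that lock for us; fence_add_callback does roughly this (paraphrased from memory, not the literal code):
>>>>>>>>
>>>>>>>>     spin_lock_irqsave(fence->lock, flags); /* == &rdev->fence_queue.lock here */
>>>>>>>>     if (!test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
>>>>>>>>         ret = fence->ops->enable_signaling(fence);
>>>>>>>>     spin_unlock_irqrestore(fence->lock, flags);
>>>>>>>>
>>>>>>>> so the last_seq check and the __add_wait_queue above are atomic with respect to the wake_up_all in the irq handler.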
>>>>>>> Ah, I see. That's also the reason why you moved the wake_up_all out of the processing function.
>>>>>> Correct. :-)
>>>>>>>> Lockups are a bit of a weird problem: the changes wouldn't allow core ttm code to handle a lockup any more,
>>>>>>>> but any driver-specific wait code would still handle it. I did this by design, because in future patches the wait
>>>>>>>> function may be called from outside of the radeon driver. The official wait function takes a timeout parameter,
>>>>>>>> so a lockup wouldn't be fatal if the timeout is set to something like 30*HZ, for example; the function would still
>>>>>>>> return and report that it timed out.
>>>>>>> Timeouts help with the detection of lockups, but not at all with handling them.
>>>>>>>
>>>>>>> What we essentially need is a wait callback into the driver that is called in non-atomic context without any locks held.
>>>>>>>
>>>>>>> This way we can block until the fence becomes signaled, with a timeout, and can then also initiate the reset handling if necessary.
>>>>>>>
>>>>>>> The way you designed the interface now means that the driver never gets a chance to wait for the hardware to become idle, and so never has the opportunity to reset the whole thing.
>>>>>> You could set up a hangcheck timer like intel does, and end up with reliable hangcheck detection that doesn't depend on CPU waits; see the sketch below. :-) Or override the default wait function and restore the old behavior.
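>>>>>>
>>>>>> A rough, completely untested sketch of the timer idea (hangcheck_work, last_activity_seq and RADEON_HANGCHECK_PERIOD are made up for illustration; the fence_drv fields and radeon_gpu_reset exist):
>>>>>>
>>>>>> static void radeon_hangcheck_work(struct work_struct *work)
>>>>>> {
>>>>>>     struct radeon_device *rdev =
>>>>>>         container_of(work, struct radeon_device, hangcheck_work.work);
>>>>>>     bool stalled = false;
>>>>>>     int i;
>>>>>>
>>>>>>     for (i = 0; i < RADEON_NUM_RINGS; ++i) {
>>>>>>         struct radeon_fence_driver *drv = &rdev->fence_drv[i];
>>>>>>         uint64_t seq = atomic64_read(&drv->last_seq);
>>>>>>
>>>>>>         /* skip rings that are uninitialized or idle */
>>>>>>         if (!drv->initialized || seq == drv->sync_seq[i])
>>>>>>             continue;
>>>>>>
>>>>>>         if (seq == drv->last_activity_seq)
>>>>>>             stalled = true; /* no progress since the last check */
>>>>>>         else
>>>>>>             drv->last_activity_seq = seq;
>>>>>>     }
>>>>>>
>>>>>>     if (stalled)
>>>>>>         radeon_gpu_reset(rdev); /* process context, no locks held */
>>>>>>     else
>>>>>>         schedule_delayed_work(&rdev->hangcheck_work,
>>>>>>                               RADEON_HANGCHECK_PERIOD);
>>>>>> }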
>>>>>
>>>>> Overriding the default wait function sounds better; please implement it this way.
>>>>>
>>>>> Thanks,
>>>>> Christian. 
>>>>
>>>> Does this modification look sane?
>>> Adding the timeout has been on my todo list for quite some time as well, so this part makes sense.
>>>
>>>> +static long __radeon_fence_wait(struct fence *f, bool intr, long timeout)
>>>> +{
>>>> +    struct radeon_fence *fence = to_radeon_fence(f);
>>>> +    u64 target_seq[RADEON_NUM_RINGS] = {};
>>>> +
>>>> +    target_seq[fence->ring] = fence->seq;
>>>> +    return radeon_fence_wait_seq_timeout(fence->rdev, target_seq, intr, timeout);
>>>> +}
>>> When this call comes from outside the radeon driver you need to take rdev->exclusive_lock here, to make sure you don't interfere with a possible reset.
>> Ah thanks, I'll add that. Roughly like the sketch below, I assume.
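>>
>> Completely untested, just to show where the lock would go (same radeon_fence_wait_seq_timeout as above):
>>
>> static long __radeon_fence_wait(struct fence *f, bool intr, long timeout)
>> {
>>     struct radeon_fence *fence = to_radeon_fence(f);
>>     struct radeon_device *rdev = fence->rdev;
>>     u64 target_seq[RADEON_NUM_RINGS] = {};
>>     long r;
>>
>>     target_seq[fence->ring] = fence->seq;
>>
>>     /* block a concurrent GPU reset for the duration of the wait */
>>     down_read(&rdev->exclusive_lock);
>>     r = radeon_fence_wait_seq_timeout(rdev, target_seq, intr, timeout);
>>     up_read(&rdev->exclusive_lock);
>>
>>     return r;
>> }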
>>
>>>>      .get_timeline_name = radeon_fence_get_timeline_name,
>>>>      .enable_signaling = radeon_fence_enable_signaling,
>>>>      .signaled = __radeon_fence_signaled,
>>> Do we still need those callbacks once we have implemented the wait callback?
>> .get_timeline_name is used for debugging (trace events).
>> .signaled is the non-blocking call to check whether the fence is signaled.
>> .enable_signaling is used for adding callbacks upon fence completion; the default fence_default_wait uses it, so
>> when it works no separate wait implementation is needed unless you want to do more than just wait.
>> It's also used when fence_add_callback is called; see the example below. i915 can be patched to use it. ;-)
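>>
>> For example, a hypothetical external user (made up here, not from this series) would do something like:
>>
>> struct my_waiter {
>>     struct fence_cb cb;
>>     struct work_struct work;
>> };
>>
>> static void my_fence_signaled(struct fence *f, struct fence_cb *cb)
>> {
>>     struct my_waiter *waiter = container_of(cb, struct my_waiter, cb);
>>
>>     /* may run in irq context, so only kick off the real work here */
>>     schedule_work(&waiter->work);
>> }
>>
>>     /* .enable_signaling gets called for us if the fence isn't done yet */
>>     if (fence_add_callback(fence, &waiter->cb, my_fence_signaled))
>>         my_fence_signaled(fence, &waiter->cb); /* already signaled */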
>
> I just meant enable_signaling; the other ones are fine with me. The problem with enable_signaling is that it's called with a spinlock held, so we can't sleep.
>
> While resetting the GPU could be moved out into a timer, the problem here is that I can't take rdev->exclusive_lock in such a situation.
>
> This means that when i915 calls into radeon to enable signaling for a fence, we can't make sure that there is no GPU reset running on another CPU. And touching the IRQ registers while a reset is going on is a really good recipe to lock up the whole system.
If you increase the irq counter on all rings before doing a GPU reset, adjust the state, and call sw_irq_put when done, this race can never happen; something like the sketch below. Or am I missing something?
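
A completely untested sketch of what I mean (radeon_irq_kms_sw_irq_get/put and exclusive_lock exist today; the actual reset sequence is elided):

int radeon_gpu_reset(struct radeon_device *rdev)
{
    int i, r;

    down_write(&rdev->exclusive_lock);

    /* hold a sw irq reference on every ring, so enable_signaling
     * from another driver only bumps a refcount and never touches
     * the irq registers while we are reprogramming them */
    for (i = 0; i < RADEON_NUM_RINGS; ++i)
        radeon_irq_kms_sw_irq_get(rdev, i);

    /* ... the existing save / asic reset / restore sequence ... */
    r = radeon_asic_reset(rdev);

    for (i = 0; i < RADEON_NUM_RINGS; ++i)
        radeon_irq_kms_sw_irq_put(rdev, i);

    up_write(&rdev->exclusive_lock);
    return r;
}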

~Maarten

