public inbox for intel-gfx@lists.freedesktop.org
 help / color / mirror / Atom feed
From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, Intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/i915: Do not lie about atomic wait granularity
Date: Mon, 1 Feb 2016 14:28:52 +0000	[thread overview]
Message-ID: <56AF6BA4.7050609@linux.intel.com> (raw)
In-Reply-To: <56AF6873.9010803@linux.intel.com>


On 01/02/16 14:15, Tvrtko Ursulin wrote:
>
> On 01/02/16 13:30, Chris Wilson wrote:
>> On Mon, Feb 01, 2016 at 01:17:35PM +0000, Tvrtko Ursulin wrote:
>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>
>>> Currently the wait_for_atomic_us only allows for a millisecond
>>> granularity which is not nice towards callers requesting small
>>> micro-second waits.
>>>
>>> Re-implement it so micro-second granularity is really supported
>>> and not just in the name of the macro.
>>>
>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>> ---
>>> Danger - this might break things which currently work by accident!
>>> ---
>>>   drivers/gpu/drm/i915/intel_drv.h | 21 ++++++++++++++++++---
>>>   1 file changed, 18 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/intel_drv.h
>>> b/drivers/gpu/drm/i915/intel_drv.h
>>> index f620023ed134..9e8a1202194c 100644
>>> --- a/drivers/gpu/drm/i915/intel_drv.h
>>> +++ b/drivers/gpu/drm/i915/intel_drv.h
>>> @@ -63,10 +63,25 @@
>>>       ret__;                                \
>>>   })
>>>
>>> +#define _wait_for_atomic(COND, US) ({ \
>>> +    unsigned long end__; \
>>> +    int ret__ = 0; \
>>> +    get_cpu(); \
>>
>> Hmm, by virtue of its name (and original intent), we are expected to be
>> in an atomic context and could just do a BUG_ON(!in_atomic()) to catch
>> misuse. Since the removal of the panic modeset, all callers outside of
>> intel_uncore.c are definitely abusing this and we would be better to use
>> a usleep[_range]() variant instead.
>
> I considered a WARN_ON_ONCE and a BUILD_BUG_ON for very long waits but
> chickened out on both.
>
> I'll respin with a WARN_ON_ONCE(!in_atomic)) to start with.

Can't really do that it seems since in_atomic() will be always false on 
non-fully-preemptible kernels.

Could do the current cpu comparison trick to catch false timeouts due 
callers from non-atomic sections but not sure if it is worth it. So it 
looks like manual audit of call sites to me.

Or find a time source with micro-second resolution which does not go 
backwards on CPU migrations?

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx

  reply	other threads:[~2016-02-01 14:28 UTC|newest]

Thread overview: 5+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2016-02-01 13:17 [PATCH] drm/i915: Do not lie about atomic wait granularity Tvrtko Ursulin
2016-02-01 13:30 ` Chris Wilson
2016-02-01 14:15   ` Tvrtko Ursulin
2016-02-01 14:28     ` Tvrtko Ursulin [this message]
2016-02-01 13:45 ` ✗ Fi.CI.BAT: failure for " Patchwork

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=56AF6BA4.7050609@linux.intel.com \
    --to=tvrtko.ursulin@linux.intel.com \
    --cc=Intel-gfx@lists.freedesktop.org \
    --cc=chris@chris-wilson.co.uk \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox