From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH 03/20] drm/i915/gem: Don't drop the timeline lock during execbuf
Date: Thu, 9 Jul 2020 11:52:19 +0100 [thread overview]
Message-ID: <db44ae52-dad5-78c9-bae5-2b9805db4790@linux.intel.com>
In-Reply-To: <159423173830.30287.17971074477427255070@build.alporthouse.com>
On 08/07/2020 19:08, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-07-08 17:54:51)
>>
>> On 06/07/2020 07:19, Chris Wilson wrote:
>>> @@ -662,18 +692,22 @@ static int eb_reserve(struct i915_execbuffer *eb)
>>> * room for the earlier objects *unless* we need to defragment.
>>> */
>>>
>>> - if (mutex_lock_interruptible(&eb->i915->drm.struct_mutex))
>>> - return -EINTR;
>>> -
>>> pass = 0;
>>> do {
>>> + int err = 0;
>>> +
>>> + if (mutex_lock_interruptible(&eb->i915->drm.struct_mutex))
>>> + return -EINTR;
>>
>> Recently you explained to me why we still use struct mutex here so
>> maybe, while moving the code, document that in a comment.
>
> Part of the work here is to eliminate the need for the struct_mutex,
> that will be replaced by not dropping the vm->mutex while binding
> multiple vma.
>
> It's the interaction with the waits to flush other vm users when under
> pressure that is the most annoying. This area is not straightforward,
> and at least deserves some comments so that the thinking behind it can
> be fixed.
>
>>> +static struct i915_request *
>>> +nested_request_create(struct intel_context *ce)
>>> +{
>>> + struct i915_request *rq;
>>> +
>>> + /* XXX This only works once; replace with shared timeline */
>>
>> Once as in an attempt to use the same local intel_context from another
>> eb would upset lockdep? It's not a problem, I think.
>
> "Once" as in this is the only time we can do this nested locking between
> engines of the same context in the whole driver, or else lockdep would
> have been right to complain. [i.e. if we ever do the reserve nesting, we
> are screwed.]
>
> Fwiw, I have posted patches that will eliminate the need for a nested
> timeline here :)
In this series or just on the mailing list?
>
>>> + mutex_lock_nested(&ce->timeline->mutex, SINGLE_DEPTH_NESTING);
>>> + intel_context_enter(ce);
>
>
>>> static int __eb_pin_engine(struct i915_execbuffer *eb, struct intel_context *ce)
>>> {
>>> struct intel_timeline *tl;
>>> @@ -2087,9 +2174,7 @@ static int __eb_pin_engine(struct i915_execbuffer *eb, struct intel_context *ce)
>>> intel_context_enter(ce);
>>> rq = eb_throttle(ce);
>>>
>>> - intel_context_timeline_unlock(tl);
>>> -
>>> - if (rq) {
>>> + while (rq) {
>>> bool nonblock = eb->file->filp->f_flags & O_NONBLOCK;
>>> long timeout;
>>>
>>> @@ -2097,23 +2182,34 @@ static int __eb_pin_engine(struct i915_execbuffer *eb, struct intel_context *ce)
>>> if (nonblock)
>>> timeout = 0;
>>>
>>> + mutex_unlock(&tl->mutex);
>>
>> "Don't drop the timeline lock during execbuf"? Is the "during execbuf"
>> actually a smaller subset of the execbuf path?
>
> We are before execbuf in my book :)
>
> This is throttling the hog before we start, and reserving enough space
> in the ring (we make sure there's a page, or thereabouts) to build a
> batch without interruption.
Ok. :)
Regards,
Tvrtko