From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH 09/20] drm/i915/gem: Assign context id for async work
Date: Thu, 9 Jul 2020 12:59:51 +0100
Message-ID: <0af3b19d-ea9e-9558-ca4a-853070f8662e@linux.intel.com>
In-Reply-To: <159429284100.22162.194646133366627797@build.alporthouse.com>
On 09/07/2020 12:07, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-07-09 12:01:29)
>>
>> On 08/07/2020 16:36, Chris Wilson wrote:
>>> Quoting Tvrtko Ursulin (2020-07-08 15:24:20)
>>>> And what is the effective behaviour you get with N contexts - emit N
>>>> concurrent operations and for N + 1 block in execbuf?
>>>
>>> Each context defines a timeline. A task is not ready to run until the
>>> task before it in its timeline has completed. So we don't block in
>>> execbuf; the scheduler waits until the request is ready before putting
>>> it into the HW queues -- i.e. the normal chain of fences, with everything
>>> that entails about ensuring it runs to completion [whether successfully
>>> or not; if not, we then rely on the error propagation to limit the damage
>>> and report it back to the user, if they kept a fence around to inspect].
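
(Just so we are talking about the same thing: my mental model of "each
context defines a timeline" in terms of the generic dma-fence API is
roughly the sketch below. The names and structure are purely
illustrative, not the actual i915 code.)

#include <linux/dma-fence.h>
#include <linux/spinlock.h>

static const char *sketch_driver_name(struct dma_fence *f)
{
        return "sketch";
}

static const char *sketch_timeline_name(struct dma_fence *f)
{
        return "sketch-timeline";
}

static const struct dma_fence_ops sketch_fence_ops = {
        .get_driver_name = sketch_driver_name,
        .get_timeline_name = sketch_timeline_name,
};

struct sketch_timeline {
        spinlock_t lock;
        u64 context;    /* one dma-fence context == one timeline */
        u64 seqno;      /* monotonic position along that timeline */
};

static void sketch_timeline_init(struct sketch_timeline *tl)
{
        spin_lock_init(&tl->lock);
        tl->context = dma_fence_context_alloc(1);
        tl->seqno = 0;
}

/* Each queued work item takes the next point on its timeline
 * (locking around the seqno elided for brevity). */
static void sketch_queue(struct sketch_timeline *tl, struct dma_fence *fence)
{
        dma_fence_init(fence, &sketch_fence_ops, &tl->lock,
                       tl->context, ++tl->seqno);
        /*
         * A scheduler would hold this work back until the fence for
         * tl->seqno - 1 on the same context has signalled, so work on
         * one timeline runs strictly in order without anyone having
         * to block in execbuf.
         */
}
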
>>
>> Okay, but what is the benefit of N contexts in this series, before the
>> work is actually spread over ctx async width CPUs? Is there any? If not,
>> I would prefer this patch be delayed until some actual parallelism is
>> ready to be added.
>
> We currently submit an unbounded amount of work. This patch is added
> along with its user to restrict the amount of work allowed to run in
> parallel, and is also used to [crudely] serialise the multiple threads
> attempting to allocate space in the vm when we completely exhaust that
> address space. We need at least one fence-context id for each user; this
> took the opportunity to generalise that to N ids per user.
Right, this is what I asked at the beginning - restricting the amount of
work run in parallel - does that mean there is some "blocking"/serialisation
during execbuf? Or is it all async, and if so, what exactly is restricted?
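
Concretely, the "N ids for each user" part I am picturing roughly as
below - the width and names are invented for illustration, not what the
patch actually does:

#include <linux/atomic.h>
#include <linux/dma-fence.h>

#define SKETCH_ASYNC_WIDTH 4    /* illustrative width only */

struct sketch_user {
        u64 base_context;       /* first of SKETCH_ASYNC_WIDTH fence contexts */
        atomic_t cursor;        /* round-robin over the N timelines */
};

static void sketch_user_init(struct sketch_user *user)
{
        user->base_context = dma_fence_context_alloc(SKETCH_ASYNC_WIDTH);
        atomic_set(&user->cursor, 0);
}

/*
 * Spread async work over the N timelines: at most N chains make progress
 * concurrently, while work queued on any single timeline stays ordered.
 */
static u64 sketch_pick_context(struct sketch_user *user)
{
        unsigned int idx;

        idx = atomic_inc_return(&user->cursor) % SKETCH_ASYNC_WIDTH;
        return user->base_context + idx;
}
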
Regards,
Tvrtko