From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 12/13] drm/i915: Async GPU relocation processing
Date: Mon, 03 Apr 2017 16:54:55 +0300
Message-ID: <1491227695.3247.14.camel@linux.intel.com>
In-Reply-To: <20170329155635.19060-13-chris@chris-wilson.co.uk>
On ke, 2017-03-29 at 16:56 +0100, Chris Wilson wrote:
> If the user requires patching of their batch or auxiliary buffers, we
> currently make the alterations on the CPU. If they are active on the GPU
> at the time, we wait under the struct_mutex for them to finish executing
> before we rewrite the contents. This happens when shared relocation trees
> are used between contexts with separate address spaces (the buffers then
> have different addresses in each), so the 3D state needs to be adjusted
> between execution on each context. However, we don't need to use the CPU
> to do the relocation patching: we can queue commands for the GPU to
> perform it and use fences to serialise the operation with current and
> future activity, so the operation on the GPU appears just as atomic as
> performing it immediately. Performing the relocation rewrites on the GPU
> is not free: in terms of pure throughput, the number of relocations/s is
> about halved - but, more importantly, so is the time spent under the
> struct_mutex.
>
> v2: Break out the request/batch allocation for clearer error flow.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
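Just to restate my understanding of the approach (a rough user-space model
only; the helper names and the command encoding below are made up for
illustration and are not the actual i915 helpers nor the real
MI_STORE_DWORD_IMM encoding): rather than poking each relocation through a
CPU mapping while holding struct_mutex, the relocated value is written by
the GPU itself, via a store-dword style packet emitted into a small batch
that runs ahead of the user batch and is serialised against it with fences:

/* Conceptual sketch of GPU-side relocation patching; not the i915 code. */
#include <stdint.h>
#include <stdio.h>

/* Placeholder opcode; real hardware would use MI_STORE_DWORD_IMM. */
#define RELOC_STORE_DWORD 0x1

struct reloc_batch {
	uint32_t cmds[256];
	unsigned int len;
};

/* Instead of "*cpu_map = value", queue a GPU write of value to gpu_addr. */
static void reloc_gpu_emit(struct reloc_batch *rb,
			   uint64_t gpu_addr, uint32_t value)
{
	rb->cmds[rb->len++] = RELOC_STORE_DWORD;
	rb->cmds[rb->len++] = (uint32_t)gpu_addr;         /* address low  */
	rb->cmds[rb->len++] = (uint32_t)(gpu_addr >> 32); /* address high */
	rb->cmds[rb->len++] = value;                      /* new presumed offset */
}

int main(void)
{
	struct reloc_batch rb = { .len = 0 };

	/* One packet per relocation entry; the resulting batch is submitted
	 * ahead of the user batch, so the rewrite looks atomic to it.
	 */
	reloc_gpu_emit(&rb, 0x100040ull, 0xdeadbeef);
	printf("queued %u dwords of relocation commands\n", rb.len);
	return 0;
}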
<SNIP>
>  static void reloc_cache_reset(struct reloc_cache *cache)
>  {
>  	void *vaddr;
>  
> +	if (cache->rq)
> +		reloc_gpu_flush(cache);
An odd place to do the flush; I was expecting GEM_BUG_ON(cache->rq) here
instead. I've already gone through the instruction generation in one spot
in the code and have no intention of going over it more times.
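For the record, roughly what I had in mind (a stubbed, standalone sketch;
finish_relocations and these type/stub definitions are hypothetical, only
the placement of the assert versus the flush is the point):

#include <assert.h>
#include <stddef.h>

/* Stand-in for the i915 GEM_BUG_ON macro: trips when the condition holds. */
#define GEM_BUG_ON(expr) assert(!(expr))

struct reloc_cache { void *rq; };

static void reloc_gpu_flush(struct reloc_cache *cache)
{
	/* ... emit batchbuffer end, submit the relocation request ... */
	cache->rq = NULL;
}

/* Reset only asserts that the flush has already happened. */
static void reloc_cache_reset(struct reloc_cache *cache)
{
	GEM_BUG_ON(cache->rq);
	/* ... unmap vaddr etc. ... */
}

/* Whoever finishes instruction generation flushes explicitly first. */
static void finish_relocations(struct reloc_cache *cache)
{
	if (cache->rq)
		reloc_gpu_flush(cache);
	reloc_cache_reset(cache);
}

int main(void)
{
	struct reloc_cache cache = { .rq = NULL };

	finish_relocations(&cache);
	return 0;
}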
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Regards, Joonas
--
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
Thread overview: 28+ messages
2017-03-29 15:56 Another week, another eb bomb Chris Wilson
2017-03-29 15:56 ` [PATCH 01/13] drm/i915: Reinstate reservation_object zapping for batch_pool objects Chris Wilson
2017-03-29 15:56 ` [PATCH 02/13] drm/i915: Copy user requested buffers into the error state Chris Wilson
2017-04-02 0:48 ` Matt Turner
2017-04-02 8:51 ` Chris Wilson
2017-04-12 21:43 ` Chris Wilson
2017-04-15 4:49 ` Matt Turner
2017-04-15 11:42 ` Chris Wilson
2017-03-29 15:56 ` [PATCH 03/13] drm/i915: Amalgamate execbuffer parameter structures Chris Wilson
2017-03-29 15:56 ` [PATCH 04/13] drm/i915: Use vma->exec_entry as our double-entry placeholder Chris Wilson
2017-03-31 9:29 ` Joonas Lahtinen
2017-04-10 10:30 ` Chris Wilson
2017-03-29 15:56 ` [PATCH 05/13] drm/i915: Split vma exec_link/evict_link Chris Wilson
2017-03-29 15:56 ` [PATCH 06/13] drm/i915: Store a direct lookup from object handle to vma Chris Wilson
2017-03-31 9:56 ` Joonas Lahtinen
2017-03-29 15:56 ` [PATCH 07/13] drm/i915: Pass vma to relocate entry Chris Wilson
2017-03-29 15:56 ` [PATCH 08/13] drm/i915: Eliminate lots of iterations over the execobjects array Chris Wilson
2017-04-04 14:57 ` Joonas Lahtinen
2017-04-10 12:17 ` Chris Wilson
2017-04-11 20:45 ` [PATCH v4] " Chris Wilson
2017-03-29 15:56 ` [PATCH 09/13] drm/i915: First try the previous execbuffer location Chris Wilson
2017-03-29 15:56 ` [PATCH 10/13] drm/i915: Wait upon userptr get-user-pages within execbuffer Chris Wilson
2017-03-29 15:56 ` [PATCH 11/13] drm/i915: Allow execbuffer to use the first object as the batch Chris Wilson
2017-03-29 15:56 ` [PATCH 12/13] drm/i915: Async GPU relocation processing Chris Wilson
2017-04-03 13:54 ` Joonas Lahtinen [this message]
2017-03-29 15:56 ` [PATCH 13/13] drm/i915/scheduler: Support user-defined priorities Chris Wilson
2017-03-29 16:17 ` ✓ Fi.CI.BAT: success for series starting with [01/13] drm/i915: Reinstate reservation_object zapping for batch_pool objects Patchwork
2017-04-11 20:47 ` ✗ Fi.CI.BAT: failure for series starting with [01/13] drm/i915: Reinstate reservation_object zapping for batch_pool objects (rev2) Patchwork