public inbox for intel-gfx@lists.freedesktop.org
From: "Siluvery, Arun" <arun.siluvery@linux.intel.com>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: "intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>
Subject: Re: [PATCH v2 1/7] drm/i915/gen8: Add infrastructure to initialize WA batch buffers
Date: Mon, 15 Jun 2015 15:14:35 +0100	[thread overview]
Message-ID: <557EDDCB.5080105@linux.intel.com> (raw)
In-Reply-To: <20150615104149.GZ8341@phenom.ffwll.local>

On 15/06/2015 11:41, Daniel Vetter wrote:
> On Thu, Jun 04, 2015 at 03:30:56PM +0100, Siluvery, Arun wrote:
>> On 02/06/2015 19:47, Dave Gordon wrote:
>>> On 02/06/15 19:36, Siluvery, Arun wrote:
>>>> On 01/06/2015 11:22, Daniel, Thomas wrote:
>>>>>>
>>>>>> Indeed, allocating an extra scratch page in the context would simplify
>>>>>> vma/mm management. A trick might be to allocate the scratch page at the
>>>>>> start, then offset the lrc regs etc - that would then be consistent
>>>>>> amongst gen and be easy enough to extend if we need more per-context
>>>>>> scratch space in future.
>>>>>> -Chris
>>>>>
>>>>> Yes, I think we already have another use for more per-context space at
>>>>> the start.  The GuC is planning to do this.  Arun, you probably should
>>>>> work with Alex Dai and Dave Gordon to avoid conflicts here.
>>>>>
>>>>> Thomas.
>>>>>
>>>>
>>>> Thanks for the heads-up Thomas.
>>>> I have discussed this with Dave and we agreed to share the page;
>>>> the GuC probably doesn't need the whole page, so the first half is
>>>> reserved for its use and the second half is used for the WA.
>>>>
>>>> I have modified my patches to use context page for applying these WA and
>>>> don't see any issues.
>>>>
>>>> During the discussions Dave proposed another approach. Even though these
>>>> WA are called per-context, they are initialized only once and not changed
>>>> afterwards, and the same set of WA is applied for each context. So instead
>>>> of adding them to each context, does it make sense to create a separate
>>>> page and share it across all contexts? Of course the GuC will add a new
>>>> page to the context anyway, so I might as well share that page.
>>>>
>>>> Chris/Dave, do you see any problems with sharing the page with the GuC,
>>>> or would you prefer to allocate a separate page for these WA and share
>>>> it across all contexts? Please give your comments.
>>>>
>>>> regards
>>>> Arun
>>>
>>> I think we have to consider which is more future-proof i.e. which is
>>> least likely:
>>> (1) the area shared with the GuC grows (definitely still in flux), or
>>> (2) workarounds need to be context-specific (possible, but unlikely)
>>>
>>> So I'd prefer a single area set up just once to contain the pre- and
>>> post-context restore workaround batches. If necessary, the one area
>>> could contain multiple batches at different offsets, so we could point
>>> different contexts at different (shared) batches as required. I think
>>> they're unlikely to actually need per-context customisation[*], but
>>> there might be a need for different workarounds according to workload
>>> type or privilege level or some other criterion ... ?
>>>
>>> .Dave.
>>>
>>> [*] unless they need per-context memory addresses coded into them?
>>>
>>
>> Considering these WA are initialized only once and not changed afterwards,
>> and the GuC area will probably grow in the future and may run into the
>> space used by the WA, an independent single-area setup makes sense.
>> I also checked the spec and it is not clear whether any customization
>> will be required for different contexts.
>> I have modified the patches to set up a single page with the WA when the
>> default_context is initialized, and this page is used by all contexts.
>>
>> I will send patches but please let me know if there are any other comments.
>
> Yeah if the wa batches aren't ctx specific, then there's really no need to
> allocate one of them per ctx. One global buffer with all the wa combined
> should really be all we need.
> -Daniel
>
Hi Daniel,

Agreed, this is taken into account in the next revision, v3 (already sent
to the mailing list).

I can see you are still going through the list but when you get there, 
please let me know if you have any other comments.

regards
Arun

_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx


Thread overview: 19+ messages
2015-05-29 18:03 [PATCH v2 0/7] Add Per-context WA using WA batch buffers Arun Siluvery
2015-05-29 18:03 ` [PATCH v2 1/7] drm/i915/gen8: Add infrastructure to initialize " Arun Siluvery
2015-05-29 18:16   ` Chris Wilson
2015-06-01 10:01     ` Siluvery, Arun
2015-06-01 10:07       ` Chris Wilson
2015-06-01 10:22         ` Daniel, Thomas
2015-06-02 18:36           ` Siluvery, Arun
2015-06-02 18:47             ` Dave Gordon
2015-06-04 14:30               ` Siluvery, Arun
2015-06-15 10:41                 ` Daniel Vetter
2015-06-15 14:14                   ` Siluvery, Arun [this message]
2015-06-15 10:40     ` Daniel Vetter
2015-05-29 18:03 ` [PATCH v2 2/7] drm/i915/gen8: Re-order init pipe_control in lrc mode Arun Siluvery
2015-05-29 18:03 ` [PATCH v2 3/7] drm/i915/gen8: Enable WA batch buffers during ctx save/restore Arun Siluvery
2015-05-29 18:03 ` [PATCH v2 4/7] drm/i915/gen8: Add WaDisableCtxRestoreArbitration workaround Arun Siluvery
2015-05-29 18:03 ` [PATCH v2 5/7] drm/i915/gen8: Add WaFlushCoherentL3CacheLinesAtContextSwitch workaround Arun Siluvery
2015-05-29 18:03 ` [PATCH v2 6/7] drm/i915/gen8: Add WaClearSlmSpaceAtContextSwitch workaround Arun Siluvery
2015-05-29 18:03 ` [PATCH v2 7/7] drm/i915/gen8: Add WaRsRestoreWithPerCtxtBb workaround Arun Siluvery
2015-05-31 20:47   ` shuang.he
