From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH 04/11] drm/i915: Make request allocation caches global
Date: Wed, 27 Feb 2019 14:17:25 +0000 [thread overview]
Message-ID: <073dfa68-04df-248e-76e5-413a7453d98b@linux.intel.com> (raw)
In-Reply-To: <155126425296.17933.2382988447768734919@skylake-alporthouse-com>

On 27/02/2019 10:44, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-02-27 10:29:43)
>>
>> On 26/02/2019 10:23, Chris Wilson wrote:
>>> As kmem_caches share the same properties (size, allocation/free behaviour)
>>> for all potential devices, we can use global caches. While this
>>> potentially has worse fragmentation behaviour (one can argue that
>>> different devices would have different activity lifetimes, but you can
>>> also argue that activity is temporal across the system) it is the
>>> default behaviour of the system at large to amalgamate matching caches.
>>>
>>> The benefit for us is much reduced pointer dancing along the frequent
>>> allocation paths.
>>>
>>> v2: Defer shrinking until after a global grace period for futureproofing
>>> multiple consumers of the slab caches, similar to the current strategy
>>> for avoiding shrinking too early.
>>
>> I suggested to call i915_globals_park directly from __i915_gem_park for
>> symmetry with how i915_gem_unpark calls i915_globals_unpark.
>> i915_globals has its own delayed setup so I don't think it benefits
>> from the double indirection courtesy of being called from shrink_caches.
>
> I replied that I left that change until a later patch, after the final
> conversions. Mostly so that we have a standalone patch to revert if the
> rcu_work turns out badly. In this patch, it was to be a simple
> translation over to global_shrink, except you asked for it to be truly
> global and so we needed another layer of counters.

It's a hard sell, I think. Why even have the rcu work now in this case?
You could make i915_globals_park just shrink if the active counter drops
to zero. I don't see the benefit of a temporary asymmetric solution.
Regards,
Tvrtko