From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Subject: Re: [CI 06/20] drm/i915: Slaughter the thundering i915_wait_request herd
Date: Mon, 6 Jun 2016 12:04:35 +0100	[thread overview]
Message-ID: <575558C3.4070700@linux.intel.com> (raw)
In-Reply-To: <20160606101437.GD23629@nuc-i3427.alporthouse.com>


On 06/06/16 11:14, Chris Wilson wrote:
> On Mon, May 23, 2016 at 09:53:40AM +0100, Tvrtko Ursulin wrote:
>>
>> On 20/05/16 13:19, Chris Wilson wrote:
>>> On Fri, May 20, 2016 at 01:04:13PM +0100, Tvrtko Ursulin wrote:
>>>>> +	p = &b->waiters.rb_node;
>>>>> +	while (*p) {
>>>>> +		parent = *p;
>>>>> +		if (wait->seqno == to_wait(parent)->seqno) {
>>>>> +			/* We have multiple waiters on the same seqno, select
>>>>> +			 * the highest priority task (that with the smallest
>>>>> +			 * task->prio) to serve as the bottom-half for this
>>>>> +			 * group.
>>>>> +			 */
>>>>> +			if (wait->task->prio > to_wait(parent)->task->prio) {
>>>>> +				p = &parent->rb_right;
>>>>> +				first = false;
>>>>> +			} else
>>>>> +				p = &parent->rb_left;
>>>>> +		} else if (i915_seqno_passed(wait->seqno,
>>>>> +					     to_wait(parent)->seqno)) {
>>>>> +			p = &parent->rb_right;
>>>>> +			if (i915_seqno_passed(seqno, to_wait(parent)->seqno))
>>>>> +				completed = parent;
>>>>
>>>> Hm don't you need the completed handling in the equal seqno case as well?
>>>
>>> i915_seqno_passed() returns true if seqnoA == seqnoB
>>
>> I meant in the first branch, multiple waiters on the same seqno?
>
> It could be applied there as well. It is just a bit hairier to keep the
> rbtree and call semantics intact. Hmm, for ourselves it is probably
> better to early return and let the pending wakeup handle the rbtree.

Did not understand this, especially the last sentence. But that's fine, I 
forgot what I meant there anyway after two weeks. :)
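
For reference, the equality behaviour mentioned above falls out of the 
wrap-safe seqno comparison; roughly (a sketch of the existing helper, not 
a hunk from this patch):

	static inline bool i915_seqno_passed(u32 seq1, u32 seq2)
	{
		/* Signed difference handles wrap-around; an equal seqno
		 * counts as passed, which is why the == case gets its own
		 * branch in the insertion loop above.
		 */
		return (s32)(seq1 - seq2) >= 0;
	}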

>>>>> +void intel_engine_remove_wait(struct intel_engine_cs *engine,
>>>>> +			      struct intel_wait *wait)
>>>>> +{
>>>>> +	struct intel_breadcrumbs *b = &engine->breadcrumbs;
>>>>> +
>>>>> +	/* Quick check to see if this waiter was already decoupled from
>>>>> +	 * the tree by the bottom-half to avoid contention on the spinlock
>>>>> +	 * by the herd.
>>>>> +	 */
>>>>> +	if (RB_EMPTY_NODE(&wait->node))
>>>>> +		return;
>>>>> +
>>>>> +	spin_lock(&b->lock);
>>>>> +
>>>>> +	if (b->first_waiter == wait->task) {
>>>>> +		struct rb_node *next;
>>>>> +		struct task_struct *task;
>>>>> +		const int priority = wait->task->prio;
>>>>> +
>>>>> +		/* We are the current bottom-half. Find the next candidate,
>>>>> +		 * the first waiter in the queue on the remaining oldest
>>>>> +		 * request. As multiple seqnos may complete in the time it
>>>>> +		 * takes us to wake up and find the next waiter, we have to
>>>>> +		 * wake up that waiter for it to perform its own coherent
>>>>> +		 * completion check.
>>>>> +		 */
>>>>> +		next = rb_next(&wait->node);
>>>>> +		if (chain_wakeup(next, priority)) {
>>>>
>>>> Don't get this, the next waiter may be on a different seqno, so how is
>>>> the priority check relevant?
>>>
>>> We only want to call try_to_wake_up() on processes at least as important
>>> as us.
>>
>> But what is the comment saying about having to wake up the following
>> waiter, because of multiple seqnos potentially completing? It says
>> that and then it may not wake up anyone depending on relative
>> priorities.
>
> It does, it always wakes the next bottom half.
>
>>>> Also, how can the next node have a smaller priority anyway? For an
>>>> equal seqno it has to be equal or greater, I thought?
>>>
>>> Next seqno can have a smaller priority (and be complete). chain_wakeup()
>>> just asks if the next waiter is as important as ourselves (the next
>>> waiter may be for a different seqno).
>>>
>>>> Then since add_waiter will wake up one extra waiter to handle the
>>>> missed irq case, here you may skip checking them based simply on
>>>> task priority which seems wrong.
>>>
>>> Correct. They will be handled by the next waiter in the queue, not us.
>>> Our job is to wake as many completed (+ the potentially completed bh) as
>>> possible without incurring undue delay for ourselves. All completed waiters
>>> will be woken in turn as the next bh to run will look at the list and
>>> wake up those at the same priority as itself (+ the next potentially
>>> completed bh).
>>
>> Who will wake up the next waiter? add_waiter will wake up one, and
>> then the next waiter here (in remove_waiter) may not wake up any
>> further based on priority.
>
> We do. We wake up all waiters that are complete at the same priority or
> better than us, plus the next waiter to take over as the bottom half.

Oh right, the if (next) branch.
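
To make that concrete, chain_wakeup() in the quoted patch is essentially 
just a priority filter, roughly:

	static inline bool chain_wakeup(struct rb_node *rb, int priority)
	{
		/* Only bother waking waiters at least as important as us
		 * (lower task->prio means higher priority); anyone we skip
		 * is picked up by whoever becomes the next bottom-half.
		 */
		return rb && to_wait(rb)->task->prio <= priority;
	}

So the removal path wakes the run of completed waiters that pass this 
filter and then, via the if (next) branch, unconditionally wakes whoever 
takes over as bottom-half, which is what stops a completion from being 
lost on priority grounds.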

>> Is the assumption that it is only possible to miss one interrupt?
>>
>>>>> +static inline bool intel_engine_wakeup(struct intel_engine_cs *engine)
>>>>> +{
>>>>> +	bool wakeup = false;
>>>>> +	struct task_struct *task = READ_ONCE(engine->breadcrumbs.first_waiter);
>>>>> +	/* Note that for this not to dangerously chase a dangling pointer,
>>>>> +	 * the caller is responsible for ensuring that the task remains valid for
>>>>> +	 * wake_up_process() i.e. that the RCU grace period cannot expire.
>>>>> +	 */
>>>>
>>>> This gets called from hard irq context and I did not manage to
>>>> figure out what makes it safe in the remove waiter path. Why
>>>> couldn't the hard irq sample NULL here?
>>>
>>> Because we only need RCU access to task, which is provided by the hard
>>> irq context.
>>
>> What does the comment mean by saying callers are responsible for the
>> RCU period? From which point to which? I still can't tie whatever
>> callers might be doing with the unrelated hardirq.
>
> Ensuring that the preempt count is raised to prevent the RCU grace
> period from expiring between the read of the task pointer and its
> dereference, i.e. rcu_read_lock() or other means.

Yes, I agree it looks fine on second look. It is only reading 
engine->breadcrumbs.first_waiter once, and that pointer will remain valid 
until the task exits plus a grace period.
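
So the caller-side contract boils down to something like this (a sketch; 
in hard irq context the read-side critical section is already implied, so 
the handler can call it directly):

	/* Keep the RCU read-side critical section across both the pointer
	 * load inside intel_engine_wakeup() and the wake_up_process() call,
	 * so the task_struct cannot be freed in between.
	 */
	rcu_read_lock();
	intel_engine_wakeup(engine);
	rcu_read_unlock();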

Regards,

Tvrtko




