From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Subject: Re: [PATCH v7] drm/i915/execlists: Direct submission of new requests (avoid tasklet/ksoftirqd)
Date: Thu, 28 Jun 2018 14:59:20 +0100
Message-ID: <889f624c-ff90-c5df-74c9-b86f44bbb064@linux.intel.com>
In-Reply-To: <153019251194.8693.15386235509479306119@mail.alporthouse.com>


On 28/06/2018 14:28, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2018-06-28 14:21:06)
>>
>> On 28/06/2018 14:11, Chris Wilson wrote:
>>> +/*
>>> + * Check the unread Context Status Buffers and manage the submission of new
>>> + * contexts to the ELSP accordingly.
>>> + */
>>> +static void execlists_submission_tasklet(unsigned long data)
>>> +{
>>> +     struct intel_engine_cs * const engine = (struct intel_engine_cs *)data;
>>> +     unsigned long flags;
>>> +
>>> +     GEM_TRACE("%s awake?=%d, active=%x\n",
>>> +               engine->name,
>>> +               engine->i915->gt.awake,
>>> +               engine->execlists.active);
>>> +
>>> +     spin_lock_irqsave(&engine->timeline.lock, flags);
>>> +
>>> +     if (engine->i915->gt.awake) /* we may be delayed until after we idle! */
>>
>> No races between the check and the tasklet call? In other words, is
>> the code path you described as able to race taking the timeline lock?
> 
> intel_engine_is_idle() is unserialised.

Okay, I think I understand. Ship it!
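
To spell out my understanding for the archive: the dequeue itself only
ever runs under engine->timeline.lock, so the unserialised gt.awake
read is merely advisory. Paraphrasing the patch rather than quoting it:

	spin_lock_irqsave(&engine->timeline.lock, flags);
	if (engine->i915->gt.awake)	/* unserialised peek, may be stale */
		__execlists_submission_tasklet(engine); /* dequeue under the lock */
	spin_unlock_irqrestore(&engine->timeline.lock, flags);

Worst case a stale awake==false skips the dequeue; the request stays
queued and the next submission or tasklet kick picks it up.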

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Regards,

Tvrtko

>>> +             __execlists_submission_tasklet(engine);
>>> +
>>> +     spin_unlock_irqrestore(&engine->timeline.lock, flags);
>>> +}
>>> +
>>>    static void queue_request(struct intel_engine_cs *engine,
>>>                          struct i915_sched_node *node,
>>>                          int prio)
>>> @@ -1144,16 +1155,30 @@ static void queue_request(struct intel_engine_cs *engine,
>>>                      &lookup_priolist(engine, prio)->requests);
>>>    }
>>>    
>>> -static void __submit_queue(struct intel_engine_cs *engine, int prio)
>>> +static void __update_queue(struct intel_engine_cs *engine, int prio)
>>>    {
>>>        engine->execlists.queue_priority = prio;
>>> -     tasklet_hi_schedule(&engine->execlists.tasklet);
>>> +}
>>> +
>>> +static void __submit_queue_imm(struct intel_engine_cs *engine)
>>> +{
>>> +     struct intel_engine_execlists * const execlists = &engine->execlists;
>>> +
>>> +     if (reset_in_progress(execlists))
>>> +             return; /* defer until we restart the engine following reset */
>>
>> We have a tasklet kick somewhere in that path?
> 
> In execlists_reset_finish()
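
Right. To record it here, my paraphrase of that flow (from memory, not
the exact code): the tasklet is disabled across the reset, which is
what reset_in_progress() observes, and the finish side re-enables it
and kicks so anything deferred above gets replayed:

	static bool reset_in_progress(const struct intel_engine_execlists *execlists)
	{
		/* The tasklet stays disabled for the duration of the reset. */
		return unlikely(atomic_read(&execlists->tasklet.count));
	}

	static void execlists_reset_finish(struct intel_engine_cs *engine)
	{
		struct intel_engine_execlists * const execlists = &engine->execlists;

		tasklet_enable(&execlists->tasklet);

		/* Replay any submission __submit_queue_imm() deferred. */
		tasklet_hi_schedule(&execlists->tasklet);
	}
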
> 
>>> +     if (execlists->tasklet.func == execlists_submission_tasklet)
>>> +             __execlists_submission_tasklet(engine);
>>> +     else
>>> +             tasklet_hi_schedule(&execlists->tasklet);
>>>    }
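
Aside for readers of the archive: the tasklet.func comparison is, as I
read it, what keeps this safe when another backend has installed its
own tasklet (e.g. the GuC submission path); anything other than the
execlists tasklet gets the traditional deferred kick.

	if (execlists->tasklet.func == execlists_submission_tasklet)
		__execlists_submission_tasklet(engine);	  /* inline dequeue */
	else
		tasklet_hi_schedule(&execlists->tasklet); /* defer to softirq */

Only the execlists function knows it is entered with
engine->timeline.lock already held.
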
>>>    
>>>    static void submit_queue(struct intel_engine_cs *engine, int prio)
>>>    {
>>> -     if (prio > engine->execlists.queue_priority)
>>> -             __submit_queue(engine, prio);
>>> +     if (prio > engine->execlists.queue_priority) {
>>> +             __update_queue(engine, prio);
>>> +             __submit_queue_imm(engine);
>>> +     }
>>>    }
>>>    
>>>    static void execlists_submit_request(struct i915_request *request)
>>> @@ -1165,11 +1190,12 @@ static void execlists_submit_request(struct i915_request *request)
>>>        spin_lock_irqsave(&engine->timeline.lock, flags);
>>>    
>>>        queue_request(engine, &request->sched, rq_prio(request));
>>> -     submit_queue(engine, rq_prio(request));
>>>    
>>>        GEM_BUG_ON(!engine->execlists.first);
>>>        GEM_BUG_ON(list_empty(&request->sched.link));
>>>    
>>> +     submit_queue(engine, rq_prio(request));
>>> +
>>>        spin_unlock_irqrestore(&engine->timeline.lock, flags);
>>>    }
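
Also worth recording why submit_queue() moved below the asserts: with
direct submission the request may be dequeued inside submit_queue()
itself, emptying request->sched.link and tripping the GEM_BUG_ON if it
came after. Condensed, on my reading:

	queue_request(engine, &request->sched, rq_prio(request));

	/* Assert before submitting: the inline dequeue may empty
	 * request->sched.link immediately. */
	GEM_BUG_ON(list_empty(&request->sched.link));

	submit_queue(engine, rq_prio(request));	/* may dequeue inline */
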
>>>    
>>> @@ -1296,8 +1322,11 @@ static void execlists_schedule(struct i915_request *request,
>>>                }
>>>    
>>>                if (prio > engine->execlists.queue_priority &&
>>> -                 i915_sw_fence_done(&sched_to_request(node)->submit))
>>> -                     __submit_queue(engine, prio);
>>> +                 i915_sw_fence_done(&sched_to_request(node)->submit)) {
>>> +                     /* defer submission until after all of our updates */
>>> +                     __update_queue(engine, prio);
>>> +                     tasklet_hi_schedule(&engine->execlists.tasklet);
>>
>> The _imm suffix on __submit_queue_imm sounds redundant if there is no
>> plain __submit_queue, which could have been used here. But meh.
> 
> I thought of trying to emphasise the immediate nature of this path. It's
> not a huge deal, but I didn't particularly like calling it
> direct_submit_queue() (or just direct_submit(), too many submits!)
> The __ prefix indicates that it's an inner function of submit_queue,
> and the _imm suffix indicates what's special about it.
> -Chris
> 

