From: Tvrtko Ursulin
To: Chris Wilson, intel-gfx@lists.freedesktop.org
Date: Tue, 21 Jan 2020 17:43:37 +0000
Subject: Re: [Intel-gfx] [PATCH v3] drm/i915/execlists: Reclaim the hanging virtual request

On 21/01/2020 17:32, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-01-21 17:19:52)
>>
>> On 21/01/2020 14:07, Chris Wilson wrote:
>>> Quoting Tvrtko Ursulin (2020-01-21 13:55:29)
>>>>
>>>> On 21/01/2020 13:04, Chris Wilson wrote:
>>>>> +	GEM_BUG_ON(!reset_in_progress(&engine->execlists));
>>>>> +
>>>>> +	/*
>>>>> +	 * An unsubmitted request along a virtual engine will
>>>>> +	 * remain on the active (this) engine until we are able
>>>>> +	 * to process the context switch away (and so mark the
>>>>> +	 * context as no longer in flight). That cannot have happened
>>>>> +	 * yet, otherwise we would not be hanging!
>>>>> +	 */
>>>>> +	spin_lock_irqsave(&ve->base.active.lock, flags);
>>>>> +	GEM_BUG_ON(intel_context_inflight(rq->context) != engine);
>>>>> +	GEM_BUG_ON(ve->request != rq);
>>>>> +	ve->request = NULL;
>>>>> +	spin_unlock_irqrestore(&ve->base.active.lock, flags);
>>>>> +
>>>>> +	rq->engine = engine;
>>>>
>>>> Let's see if I understand this... the tasklet has been disabled and the
>>>> ring paused. But we find an uncompleted request in the ELSP context,
>>>> with rq->engine == virtual engine. Therefore this cannot be the first
>>>> request on this timeline but has to be a later one.
>>>
>>> Not quite.
>>>
>>> engine->execlists.active[] tracks the HW; it gets updated only upon
>>> receiving HW acks (or when we reset).
>>>
>>> So if execlists_active()->engine == virtual, it can only mean that the
>>> inflight _hanging_ request has already been unsubmitted by an earlier
>>> preemption in execlists_dequeue(), but that preemption has not yet been
>>> processed by the HW. (Hence the preemption-reset underway.)
>>>
>>> Now, while we coalesce the requests for a context into a single ELSP[]
>>> slot, and only record the last request submitted for a context, we have
>>> to walk back along that context's timeline to find the earliest
>>> incomplete request and blame the hang upon it.
>>>
>>> For a virtual engine, it's much simpler as there is only ever one
>>> request in flight, but I don't think that has any impact here other
>>> than that we only need to repair the single unsubmitted request that
>>> was returned to the virtual engine.
>>>
>>>> One which has been put on the runqueue but not yet submitted to hw.
>>>> (Because one at a time.) Or it has been unsubmitted by
>>>> __unwind_incomplete_request already. In the former case why move it to
>>>> the physical engine? Also in the latter case, actually, it would
>>>> overwrite rq->engine with the physical one.
>>>
>>> Yes. For an incomplete preemption event, the request is *still* on this
>>> engine and has not been released (rq->context->inflight == engine, so it
>>> cannot be submitted to any other engine until after we acknowledge that
>>> the context has been saved and is no longer being accessed by the HW).
>>> It is legal for us to process the hanging request along this engine; we
>>> make a suboptimal decision by returning the request to the same engine
>>> after the reset, but since we have replaced the hanging payload, the
>>> request is a mere signaling placeholder (and I do not think it will
>>> overly burden the system or negatively impact other virtual engines).
>>
>> What if the request in the ELSP actually completed in the meantime,
>> e.g. the preemption timeout was a false positive?
>>
>> In execlists_capture we do:
>>
>>   cap->rq = execlists_active(&engine->execlists);
>>
>> This gets a completed request, then we do:
>>
>>   cap->rq = active_request(cap->rq->context->timeline, cap->rq);
>>
>> This walks along the virtual timeline and finds the next virtual
>> request. It then binds this request to a physical engine and sets
>> ve->request to NULL.
>
> If we miss the completion event, then active_request() returns the
> original request and we blame it for having a 650ms preemption-off
> shader with a 640ms preemption timeout.
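To make sure we mean the same walk, here is a standalone toy model of that
"blame the earliest incomplete request" step as I read it. The names
(toy_request, toy_active_request) are made up purely for illustration;
this is not the actual i915 code:

/*
 * Toy model: walk back from the last submitted request on a timeline to
 * the oldest request the CPU has not yet observed as complete; that is
 * the request that gets blamed for the hang.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_request {
	int seqno;
	bool completed;            /* completion as observed so far */
	struct toy_request *prev;  /* older request on the same timeline */
};

static struct toy_request *toy_active_request(struct toy_request *last)
{
	struct toy_request *blame = last;

	for (struct toy_request *rq = last; rq; rq = rq->prev) {
		if (rq->completed)
			break;
		blame = rq;
	}

	return blame;
}

int main(void)
{
	struct toy_request a = { .seqno = 1, .completed = true,  .prev = NULL };
	struct toy_request b = { .seqno = 2, .completed = false, .prev = &a };
	struct toy_request c = { .seqno = 3, .completed = false, .prev = &b };

	/* If b actually completed on the HW but we have not yet observed
	 * the event, b is still the request returned and blamed. */
	printf("blamed seqno = %d\n", toy_active_request(&c)->seqno);
	return 0;
}

So whether the blame lands correctly hinges entirely on when the
completion becomes visible relative to that walk, which is what the
sequence below is trying to pin down.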
I am thinking of this sequence of interleaved events:

preempt_timeout
  tasklet_disable
  ring_pause
  execlists_active
        seqno write visible
  active_request - walks the tl to the next
  execlists_hold
  schedule_worker
  tasklet_enable
process_csb
  completed

Is this not possible? The seqno write only needs to land roughly there,
since, as long as the tasklet has been disabled, execlists->active
remains fixed.

>> Then on unhold ve->submit_notify is called, which sets ve->request to
>> this request, but rq->engine points to the physical engine.
>
> We don't call ve->submit_notify() on unhold, we put it back into our
> local priority queue. Keeping ownership of the request on the local
> engine seemed the easiest way to keep track of the locking, and
> re-submitting the guilty request on the same engine should not be an
> issue.

True, I am jumping between different things and have confused this bit.

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx