From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andi Shyti
To: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, stable@vger.kernel.org
Cc: Matthew Auld, Maciej Patelczyk, Chris Wilson, Andi Shyti, Andi Shyti
Subject: [PATCH v3 2/2] drm/i915/gt: Make sure that errors are propagated through request chains
Date: Tue, 28 Feb 2023 03:11:42 +0100
Message-Id: <20230228021142.1905349-3-andi.shyti@linux.intel.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20230228021142.1905349-1-andi.shyti@linux.intel.com>
References: <20230228021142.1905349-1-andi.shyti@linux.intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

Currently, when we perform operations such as clearing or copying large
blocks of memory, we generate multiple requests that are executed in a
chain. However, if one of these requests fails, we may not realize it
unless it happens to be the last request in the chain, because errors
are not properly propagated.

To address this, we need to ensure that the chain of fence notifications
is always propagated so that we can reach the final fence associated
with the last request. By doing so, we will be able to detect any memory
operation failure and determine whether the memory is still invalid.
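For illustration only (this is not the driver code): the small userspace C
model below, with made-up names, shows why looking only at the last request
is meaningless unless every link in the chain carries the error forward.

    #include <stdio.h>
    #include <stdlib.h>

    struct fake_request {
    	int status;                /* 0 on success, negative errno on failure */
    	struct fake_request *prev; /* previous link in the chain, if any */
    };

    /* Emit one request; 'fail' simulates a mid-chain error (e.g. -ENOMEM). */
    static struct fake_request *emit_request(struct fake_request *prev, int fail)
    {
    	struct fake_request *rq = calloc(1, sizeof(*rq));

    	if (!rq)
    		return prev;

    	rq->prev = prev;
    	rq->status = fail;

    	/* Carry the first error seen earlier in the chain forward. */
    	if (prev && prev->status && !rq->status)
    		rq->status = prev->status;

    	return rq;
    }

    int main(void)
    {
    	struct fake_request *rq = NULL;
    	int i;

    	/* Build a chain of five requests; the third one fails. */
    	for (i = 0; i < 5; i++)
    		rq = emit_request(rq, i == 2 ? -12 /* ENOMEM */ : 0);

    	/* The caller only ever looks at the last request in the chain. */
    	printf("final request status: %d\n", rq ? rq->status : 0);

    	while (rq) {
    		struct fake_request *prev = rq->prev;

    		free(rq);
    		rq = prev;
    	}

    	return 0;
    }

In the driver the same effect is obtained by keeping each request's submit
fence chained (i915_sw_fence_await()/i915_sw_fence_complete()) until the
next request has been queued, as done in the patch below.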
On copy and clear migration, signal fences upon request completion to
ensure that the outcome of the operation is reliably propagated.

Fixes: cf586021642d80 ("drm/i915/gt: Pipelined page migration")
Reported-by: Matthew Auld
Suggested-by: Chris Wilson
Signed-off-by: Andi Shyti
Cc: stable@vger.kernel.org
Reviewed-by: Matthew Auld
---
 drivers/gpu/drm/i915/gt/intel_migrate.c | 39 ++++++++++++++++++-------
 1 file changed, 29 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_migrate.c b/drivers/gpu/drm/i915/gt/intel_migrate.c
index 3f638f1987968..6b497640d3a0a 100644
--- a/drivers/gpu/drm/i915/gt/intel_migrate.c
+++ b/drivers/gpu/drm/i915/gt/intel_migrate.c
@@ -742,13 +742,19 @@ intel_context_migrate_copy(struct intel_context *ce,
 		dst_offset = 2 * CHUNK_SZ;
 	}
 
+	/*
+	 * While building the chain of requests, we need to ensure
+	 * that no one can sneak into the timeline unnoticed.
+	 */
+	mutex_lock(&ce->timeline->mutex);
+
 	do {
 		int len;
 
 		rq = i915_request_create(ce);
 		if (IS_ERR(rq)) {
 			err = PTR_ERR(rq);
-			goto out_ce;
+			break;
 		}
 
 		if (deps) {
@@ -878,10 +884,14 @@ intel_context_migrate_copy(struct intel_context *ce,
 
 		/* Arbitration is re-enabled between requests. */
 out_rq:
-		if (*out)
-			i915_request_put(*out);
-		*out = i915_request_get(rq);
+		i915_sw_fence_await(&rq->submit);
+		i915_request_get(rq);
 		i915_request_add(rq);
+		if (*out) {
+			i915_sw_fence_complete(&(*out)->submit);
+			i915_request_put(*out);
+		}
+		*out = rq;
 
 		if (err)
 			break;
@@ -905,7 +915,10 @@ intel_context_migrate_copy(struct intel_context *ce,
 		cond_resched();
 	} while (1);
 
-out_ce:
+	mutex_unlock(&ce->timeline->mutex);
+
+	if (*out)
+		i915_sw_fence_complete(&(*out)->submit);
 	return err;
 }
 
@@ -1005,7 +1018,7 @@ intel_context_migrate_clear(struct intel_context *ce,
 		rq = i915_request_create(ce);
 		if (IS_ERR(rq)) {
 			err = PTR_ERR(rq);
-			goto out_ce;
+			break;
 		}
 
 		if (deps) {
@@ -1056,17 +1069,23 @@ intel_context_migrate_clear(struct intel_context *ce,
 
 		/* Arbitration is re-enabled between requests. */
 out_rq:
-		if (*out)
-			i915_request_put(*out);
-		*out = i915_request_get(rq);
+		i915_sw_fence_await(&rq->submit);
+		i915_request_get(rq);
 		i915_request_add(rq);
+		if (*out) {
+			i915_sw_fence_complete(&(*out)->submit);
+			i915_request_put(*out);
+		}
+		*out = rq;
+
 		if (err || !it.sg || !sg_dma_len(it.sg))
 			break;
 
 		cond_resched();
 	} while (1);
 
-out_ce:
+	if (*out)
+		i915_sw_fence_complete(&(*out)->submit);
 	return err;
 }
 
-- 
2.39.1