From: Tomasz Lis <tomasz.lis@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: Michał Winiarski, Michał Wajdeczko, Piotr Piórkowski,
	Matthew Brost, Lucas De Marchi
Subject: [PATCH v7 4/4] drm/xe/vf: Fixup CTB send buffer messages after migration
Date: Thu, 3 Apr 2025 20:40:55 +0200
Message-Id: <20250403184055.2317409-5-tomasz.lis@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20250403184055.2317409-1-tomasz.lis@intel.com>
References: <20250403184055.2317409-1-tomasz.lis@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

During post-migration recovery of a VF, it is necessary to update
GGTT references included in messages which are going to be sent to
GuC. GuC will start consuming messages only after the VF KMD informs
it that the fixups are done; until then, the VF KMD is expected to
update any H2G messages which are already in the send buffer but were
not yet consumed by GuC. Only a small subset of the messages allowed
for VFs have GGTT references in them.
This patch adds functionality to parse the CTB send ring buffer and
shift the GGTT addresses contained within.

While fixing the CTB content, ct->lock is not taken. This means the
only lock taken remains the GGTT address lock - which is fine, because
only requests with GGTT addresses matter - but it also means tail
changes can happen while the CTB fixups are executing (which may be
ignored, as any new messages will not have anything to fix). The GGTT
address locking will be introduced in a future series.

v2: removed storing the shift, as that's now done in the VMA nodes
 patch; macros to inlines; warns to asserts; log message fixes (Michal)
v3: removed inline keywords; enums for offsets in CTB messages; fewer
 error messages; made functions void where the return value was unused
 (Michal)
v4: update the cached head before starting fixups
v5: removed/updated comments, wrapped lines, converted an assert into
 an error, moved enums for offsets to a separate patch, reused xe_map_rd
v6: defined xe_map_*_array() macros, added support for a CTB wrap which
 divides a message, updated comments, moved one function to an earlier
 patch

Signed-off-by: Tomasz Lis <tomasz.lis@intel.com>
---
 drivers/gpu/drm/xe/xe_guc_ct.c   | 147 +++++++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_guc_ct.h   |   2 +
 drivers/gpu/drm/xe/xe_map.h      |  12 +++
 drivers/gpu/drm/xe/xe_sriov_vf.c |  18 ++++
 4 files changed, 179 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
index 686fe664c20d..add2d9b12345 100644
--- a/drivers/gpu/drm/xe/xe_guc_ct.c
+++ b/drivers/gpu/drm/xe/xe_guc_ct.c
@@ -84,6 +84,8 @@ struct g2h_fence {
 	bool done;
 };
 
+#define make_u64(hi, lo) ((u64)((u64)(u32)(hi) << 32 | (u32)(lo)))
+
 static void g2h_fence_init(struct g2h_fence *g2h_fence, u32 *response_buffer)
 {
 	g2h_fence->response_buffer = response_buffer;
@@ -1622,6 +1624,151 @@ static void g2h_worker_func(struct work_struct *w)
 	receive_g2h(ct);
 }
 
+static void xe_ring_map_fixup_u64(struct xe_device *xe, struct iosys_map *cmds,
+				  u32 size, u32 head, u32 pos, s64 shift)
+{
+	u32 hi, lo;
+	u64 offset;
+
+	lo = xe_map_rd_array_u32(xe, cmds, (head + pos) % size);
+	hi = xe_map_rd_array_u32(xe, cmds, (head + pos + 1) % size);
+	offset = make_u64(hi, lo);
+	offset += shift;
+	lo = lower_32_bits(offset);
+	hi = upper_32_bits(offset);
+	xe_map_wr_array_u32(xe, cmds, (head + pos) % size, lo);
+	xe_map_wr_array_u32(xe, cmds, (head + pos + 1) % size, hi);
+}
+
+/*
+ * Shift any GGTT addresses within a single message left within CTB from
+ * before post-migration recovery.
+ * @ct: pointer to CT struct of the target GuC
+ * @cmds: iomap buffer containing CT messages
+ * @head: start of the target message within the buffer
+ * @len: length of the target message
+ * @size: size of the commands buffer
+ * @shift: the address shift to be added to each GGTT reference
+ */
+static void ct_update_addresses_in_message(struct xe_guc_ct *ct,
+					   struct iosys_map *cmds, u32 head,
+					   u32 len, u32 size, s64 shift)
+{
+	struct xe_device *xe = ct_to_xe(ct);
+	u32 action, i, n;
+	u32 msg[1];
+
+	xe_map_memcpy_from(xe, msg, cmds, head * sizeof(u32),
+			   1 * sizeof(u32));
+	action = FIELD_GET(GUC_HXG_REQUEST_MSG_0_ACTION, msg[0]);
+	switch (action) {
+	case XE_GUC_ACTION_REGISTER_CONTEXT:
+	case XE_GUC_ACTION_REGISTER_CONTEXT_MULTI_LRC:
+		xe_ring_map_fixup_u64(xe, cmds, size, head,
+				      XE_GUC_REGISTER_CONTEXT_MULTI_LRC_DATA_5_WQ_DESC_ADDR_LOWER,
+				      shift);
+		xe_ring_map_fixup_u64(xe, cmds, size, head,
+				      XE_GUC_REGISTER_CONTEXT_MULTI_LRC_DATA_7_WQ_BUF_BASE_LOWER,
+				      shift);
+		if (action == XE_GUC_ACTION_REGISTER_CONTEXT_MULTI_LRC) {
+			n = xe_map_rd_array_u32(xe, cmds, (head +
+					XE_GUC_REGISTER_CONTEXT_MULTI_LRC_DATA_10_NUM_CTXS) % size);
+			for (i = 0; i < n; i++)
+				xe_ring_map_fixup_u64(xe, cmds, size, head,
+						      XE_GUC_REGISTER_CONTEXT_MULTI_LRC_DATA_11_HW_LRC_ADDR +
+						      2 * i, shift);
+		} else {
+			xe_ring_map_fixup_u64(xe, cmds, size, head,
+					      XE_GUC_REGISTER_CONTEXT_DATA_10_HW_LRC_ADDR, shift);
+		}
+		break;
+	default:
+		break;
+	}
+}
+
+static int ct_update_addresses_in_buffer(struct xe_guc_ct *ct,
+					 struct guc_ctb *h2g,
+					 s64 shift, u32 *mhead, s32 avail)
+{
+	struct xe_device *xe = ct_to_xe(ct);
+	u32 head = *mhead;
+	u32 size = h2g->info.size;
+	u32 msg[1];
+	u32 len;
+
+	/* Read header */
+	xe_map_memcpy_from(xe, msg, &h2g->cmds, sizeof(u32) * head,
+			   sizeof(u32));
+	len = FIELD_GET(GUC_CTB_MSG_0_NUM_DWORDS, msg[0]) + GUC_CTB_MSG_MIN_LEN;
+
+	if (unlikely(len > (u32)avail)) {
+		xe_gt_err(ct_to_gt(ct), "H2G channel broken on read, avail=%d, len=%d, fixups skipped\n",
+			  avail, len);
+		return 0;
+	}
+
+	head = (head + 1) % size;
+	ct_update_addresses_in_message(ct, &h2g->cmds, head, len - 1, size, shift);
+	*mhead = (head + len - 1) % size;
+
+	return avail - len;
+}
+
+/**
+ * xe_guc_ct_fixup_messages_with_ggtt - Fixup any pending H2G CTB messages
+ * @ct: pointer to CT struct of the target GuC
+ * @ggtt_shift: shift to be added to all GGTT addresses within the CTB
+ *
+ * Messages in guc-to-host CTB are owned by GuC and any fixups in them
+ * are made by GuC. But content of the host-to-guc CTB is owned by the
+ * KMD, so fixups to GGTT references in any pending messages need to be
+ * applied here.
+ * This function updates GGTT offsets in payloads of pending H2G CTB
+ * messages (messages which were not consumed by GuC before the VF got
+ * paused).
+ */
+void xe_guc_ct_fixup_messages_with_ggtt(struct xe_guc_ct *ct, s64 ggtt_shift)
+{
+	struct xe_guc *guc = ct_to_guc(ct);
+	struct xe_gt *gt = guc_to_gt(guc);
+	struct guc_ctb *h2g = &ct->ctbs.h2g;
+	u32 head, tail, size;
+	s32 avail;
+
+	if (unlikely(h2g->info.broken))
+		return;
+
+	h2g->info.head = desc_read(ct_to_xe(ct), h2g, head);
+	head = h2g->info.head;
+	tail = READ_ONCE(h2g->info.tail);
+	size = h2g->info.size;
+
+	if (unlikely(head > size))
+		goto corrupted;
+
+	if (unlikely(tail >= size))
+		goto corrupted;
+
+	avail = tail - head;
+
+	/* beware of buffer wrap case */
+	if (unlikely(avail < 0))
+		avail += size;
+	xe_gt_dbg(gt, "available %d (%u:%u:%u)\n", avail, head, tail, size);
+	xe_gt_assert(gt, avail >= 0);
+
+	while (avail > 0)
+		avail = ct_update_addresses_in_buffer(ct, h2g, ggtt_shift, &head, avail);
+
+	return;
+
+corrupted:
+	xe_gt_err(gt, "Corrupted H2G descriptor head=%u tail=%u size=%u, fixups not applied\n",
+		  head, tail, size);
+	h2g->info.broken = true;
+}
+
 static struct xe_guc_ct_snapshot *guc_ct_snapshot_alloc(struct xe_guc_ct *ct, bool atomic,
 							 bool want_ctb)
 {
diff --git a/drivers/gpu/drm/xe/xe_guc_ct.h b/drivers/gpu/drm/xe/xe_guc_ct.h
index 82c4ae458dda..5649bda82823 100644
--- a/drivers/gpu/drm/xe/xe_guc_ct.h
+++ b/drivers/gpu/drm/xe/xe_guc_ct.h
@@ -22,6 +22,8 @@ void xe_guc_ct_snapshot_print(struct xe_guc_ct_snapshot *snapshot, struct drm_pr
 void xe_guc_ct_snapshot_free(struct xe_guc_ct_snapshot *snapshot);
 void xe_guc_ct_print(struct xe_guc_ct *ct, struct drm_printer *p, bool want_ctb);
 
+void xe_guc_ct_fixup_messages_with_ggtt(struct xe_guc_ct *ct, s64 ggtt_shift);
+
 static inline bool xe_guc_ct_enabled(struct xe_guc_ct *ct)
 {
 	return ct->state == XE_GUC_CT_STATE_ENABLED;
diff --git a/drivers/gpu/drm/xe/xe_map.h b/drivers/gpu/drm/xe/xe_map.h
index f62e0c8b67ab..db98c8fb121f 100644
--- a/drivers/gpu/drm/xe/xe_map.h
+++ b/drivers/gpu/drm/xe/xe_map.h
@@ -78,6 +78,18 @@ static inline void xe_map_write32(struct xe_device *xe, struct iosys_map *map,
 	iosys_map_wr(map__, offset__, type__, val__); \
 })
 
+#define xe_map_rd_array(xe__, map__, index__, type__) \
+	xe_map_rd(xe__, map__, (index__) * sizeof(type__), type__)
+
+#define xe_map_wr_array(xe__, map__, index__, type__, val__) \
+	xe_map_wr(xe__, map__, (index__) * sizeof(type__), type__, val__)
+
+#define xe_map_rd_array_u32(xe__, map__, index__) \
+	xe_map_rd_array(xe__, map__, index__, u32)
+
+#define xe_map_wr_array_u32(xe__, map__, index__, val__) \
+	xe_map_wr_array(xe__, map__, index__, u32, val__)
+
 #define xe_map_rd_field(xe__, map__, struct_offset__, struct_type__, field__) ({ \
 	struct xe_device *__xe = xe__; \
 	xe_device_assert_mem_access(__xe); \
diff --git a/drivers/gpu/drm/xe/xe_sriov_vf.c b/drivers/gpu/drm/xe/xe_sriov_vf.c
index e70f1ceabbb3..2674fa948fda 100644
--- a/drivers/gpu/drm/xe/xe_sriov_vf.c
+++ b/drivers/gpu/drm/xe/xe_sriov_vf.c
@@ -10,6 +10,7 @@
 #include "xe_gt.h"
 #include "xe_gt_sriov_printk.h"
 #include "xe_gt_sriov_vf.h"
+#include "xe_guc_ct.h"
 #include "xe_pm.h"
 #include "xe_sriov.h"
 #include "xe_sriov_printk.h"
@@ -158,6 +159,20 @@ static int vf_post_migration_requery_guc(struct xe_device *xe)
 	return ret;
 }
 
+static void vf_post_migration_fixup_ctb(struct xe_device *xe)
+{
+	struct xe_gt *gt;
+	unsigned int id;
+
+	xe_assert(xe, IS_SRIOV_VF(xe));
+
+	for_each_gt(gt, xe, id) {
+		s64 shift = xe_gt_sriov_vf_ggtt_shift(gt);
+
+		xe_guc_ct_fixup_messages_with_ggtt(&gt->uc.guc.ct, shift);
+	}
+}
+
 /*
  * vf_post_migration_imminent - Check if post-restore recovery is coming.
 * @xe: the &xe_device struct instance
@@ -224,6 +239,9 @@ static void vf_post_migration_recovery(struct xe_device *xe)
 	need_fixups = vf_post_migration_fixup_ggtt_nodes(xe);
 	/* FIXME: add the recovery steps */
+	if (need_fixups)
+		vf_post_migration_fixup_ctb(xe);
+
 	vf_post_migration_notify_resfix_done(xe);
 	xe_pm_runtime_put(xe);
 	drm_notice(&xe->drm, "migration recovery ended\n");
-- 
2.25.1
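
A side note for readers: the subtle piece of this patch is that a
message may wrap around the end of the CTB ring, so the low and high
dwords of one GGTT address can sit on opposite sides of the wrap point.
Below is a standalone sketch of that same modulo arithmetic - not
kernel code from the patch; it is plain C with a bare u32 array
standing in for the iosys_map, and all names and values are invented
for illustration:

#include <stdint.h>
#include <stdio.h>

/* Apply 'shift' to a 64-bit address stored as two little-endian u32
 * slots at ring positions (head + pos) and (head + pos + 1), reducing
 * each slot index modulo the ring size, as xe_ring_map_fixup_u64() does.
 */
static void fixup_u64(uint32_t *cmds, uint32_t size, uint32_t head,
		      uint32_t pos, int64_t shift)
{
	uint32_t lo = cmds[(head + pos) % size];
	uint32_t hi = cmds[(head + pos + 1) % size];
	uint64_t addr = ((uint64_t)hi << 32 | lo) + shift;

	cmds[(head + pos) % size] = (uint32_t)addr;
	cmds[(head + pos + 1) % size] = (uint32_t)(addr >> 32);
}

int main(void)
{
	/* 8-slot ring; the address 0x1'0001'0000 is split across the
	 * wrap: low dword in slot 7, high dword in slot 0.
	 */
	uint32_t ring[8] = { 0x00000001 };

	ring[7] = 0x00010000;
	fixup_u64(ring, 8, 7, 0, 0x100000);	/* shift by 1 MiB */
	/* prints lo=0x110000 hi=0x1 */
	printf("lo=%#x hi=%#x\n", (unsigned)ring[7], (unsigned)ring[0]);
	return 0;
}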