From: Jonathan Cavitt <jonathan.cavitt@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: saurabhg.gupta@intel.com, alex.zuo@intel.com,
jonathan.cavitt@intel.com, matthew.brost@intel.com,
daniele.ceraolospurio@intel.com, rodrigo.vivi@intel.com,
michal.wajdeczko@intel.com
Subject: [PATCH 4/6] drm/xe/xe_guc_ct: READ_ONCE ct state in xe_guc_ct_initialized
Date: Thu, 18 Dec 2025 15:35:32 +0000 [thread overview]
Message-ID: <20251218153527.6436-12-jonathan.cavitt@intel.com> (raw)
In-Reply-To: <20251218153527.6436-8-jonathan.cavitt@intel.com>
Use READ_ONCE() when reading ct->state in xe_guc_ct_initialized() to
prevent the compiler from caching, tearing, or eliding the load.
ct->state can be updated concurrently from another thread, so the read
must be performed exactly once, directly from memory.
Fixes: 0b93b7dcd9eb ("drm/xe: Fix early wedge on GuC load failure")
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
---
drivers/gpu/drm/xe/xe_guc_ct.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_guc_ct.h b/drivers/gpu/drm/xe/xe_guc_ct.h
index 5599939f8fe1..d4c9cb4dbcb5 100644
--- a/drivers/gpu/drm/xe/xe_guc_ct.h
+++ b/drivers/gpu/drm/xe/xe_guc_ct.h
@@ -30,7 +30,7 @@ void xe_guc_ct_print(struct xe_guc_ct *ct, struct drm_printer *p, bool want_ctb)
static inline bool xe_guc_ct_initialized(struct xe_guc_ct *ct)
{
- return ct->state != XE_GUC_CT_STATE_NOT_INITIALIZED;
+ return READ_ONCE(ct->state) != XE_GUC_CT_STATE_NOT_INITIALIZED;
}
static inline bool xe_guc_ct_enabled(struct xe_guc_ct *ct)
--
2.43.0
Thread overview: 16+ messages
2025-12-18 15:35 [PATCH 0/6] drm/xe/xe_guc_ct: Prevent compiler read/write optimization breaks Jonathan Cavitt
2025-12-18 15:35 ` [PATCH 1/6] drm/xe/xe_guc_ct: WRITE_ONCE g2h_fence done in g2h_fence_cancel Jonathan Cavitt
2025-12-18 21:08 ` Dixit, Ashutosh
2025-12-18 15:35 ` [PATCH 2/6] drm/xe/xe_guc_ct: WRITE_ONCE g2h_fence done in parse_g2h_response Jonathan Cavitt
2025-12-18 15:35 ` [PATCH 3/6] drm/xe/xe_guc_ct: WRITE_ONCE ct state in guc_ct_change_state Jonathan Cavitt
2025-12-18 15:35 ` Jonathan Cavitt [this message]
2025-12-18 15:35 ` [PATCH 5/6] drm/xe/xe_guc_ct: READ_ONCE ct state in xe_guc_ct_enabled Jonathan Cavitt
2025-12-18 15:35 ` [PATCH 6/6] drm/xe/xe_guc_ct: Justify WRITE_ONCE/READ_ONCE usage Jonathan Cavitt
2025-12-18 21:01 ` Dixit, Ashutosh
2025-12-18 21:03 ` Cavitt, Jonathan
2025-12-18 21:10 ` Dixit, Ashutosh
2025-12-18 21:02 ` Rodrigo Vivi
2025-12-18 16:18 ` ✓ CI.KUnit: success for drm/xe/xe_guc_ct: Prevent compiler read/write optimization breaks (rev3) Patchwork
2025-12-18 16:52 ` ✓ Xe.CI.BAT: " Patchwork
2025-12-18 20:59 ` [PATCH 0/6] drm/xe/xe_guc_ct: Prevent compiler read/write optimization breaks Summers, Stuart
2025-12-19 13:38 ` ✓ Xe.CI.Full: success for drm/xe/xe_guc_ct: Prevent compiler read/write optimization breaks (rev3) Patchwork