From: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: badal.nilawar@intel.com, lucas.demarchi@intel.com,
ashutosh.dixit@intel.com
Subject: [PATCH v2 4/4] drm/xe/remapper: Reprogram remapper index on PM resume events
Date: Mon, 17 Nov 2025 12:53:20 -0800
Message-ID: <20251117205315.1458477-10-umesh.nerlige.ramappa@intel.com>
In-Reply-To: <20251117205315.1458477-6-umesh.nerlige.ramappa@intel.com>
The device enters the D3cold state during both runtime and system
suspend, which requires the SoC remapper index to be reprogrammed on
the corresponding resume paths.
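
To illustrate the approach, here is a minimal, self-contained userspace
sketch of the save-on-write / restore-on-resume pattern (all names below,
such as hw_reg, remapper_cache, remapper_set and remapper_resume, are
illustrative stand-ins; the actual implementation is the xe_soc_remapper_*
helpers in the diff that follows):

    /*
     * Illustrative model: cache the value produced by each read-modify-write
     * of the index register, and replay it once the register contents are
     * lost across a D3cold transition.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t hw_reg;         /* stand-in for SG_REMAP_INDEX1 */

    struct remapper_cache {
        uint32_t state;             /* last value written to the register */
        bool state_initialized;     /* false until the first write */
    };

    static void remapper_set(struct remapper_cache *c, uint32_t mask, uint32_t val)
    {
        /* read-modify-write, then remember the resulting value */
        hw_reg = (hw_reg & ~mask) | val;
        c->state = hw_reg;
        c->state_initialized = true;
    }

    static void remapper_resume(struct remapper_cache *c)
    {
        /* nothing to restore if the index was never programmed */
        if (!c->state_initialized)
            return;
        hw_reg = c->state;
    }

    int main(void)
    {
        struct remapper_cache c = { 0 };

        remapper_set(&c, 0xff, 0x2a);   /* driver programs an index */
        hw_reg = 0;                     /* D3cold: register contents lost */
        remapper_resume(&c);            /* PM resume replays the cached value */
        printf("restored: 0x%x\n", hw_reg);
        return 0;
    }

In the kernel code the cached value is computed under the spinlock as
(old & ~mask) | val, matching what xe_mmio_rmw32() wrote to the register.
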
Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
---
 drivers/gpu/drm/xe/xe_device_types.h |  6 ++++++
 drivers/gpu/drm/xe/xe_pm.c           |  5 +++++
 drivers/gpu/drm/xe/xe_soc_remapper.c | 17 ++++++++++++++++-
 drivers/gpu/drm/xe/xe_soc_remapper.h |  1 +
 4 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index de23fff3262c..9875e3db4a1f 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -551,6 +551,12 @@ struct xe_device {
 	struct {
 		/* Serialize access to SoC Remapper's index registers */
 		spinlock_t lock;
+
+		/* Last value of INDEX1 register */
+		u32 state;
+
+		/* A flag indicating state is initialized */
+		bool state_initialized;
 	} soc_remapper;
 
 	/**
diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
index 44924512830f..8a1b440df4ce 100644
--- a/drivers/gpu/drm/xe/xe_pm.c
+++ b/drivers/gpu/drm/xe/xe_pm.c
@@ -24,6 +24,7 @@
 #include "xe_late_bind_fw.h"
 #include "xe_pcode.h"
 #include "xe_pxp.h"
+#include "xe_soc_remapper.h"
 #include "xe_sriov_vf_ccs.h"
 #include "xe_trace.h"
 #include "xe_vm.h"
@@ -236,6 +237,8 @@ int xe_pm_resume(struct xe_device *xe)
 	drm_dbg(&xe->drm, "Resuming device\n");
 	trace_xe_pm_resume(xe, __builtin_return_address(0));
 
+	xe_soc_remapper_resume(xe);
+
 	for_each_gt(gt, xe, id)
 		xe_gt_idle_disable_c6(gt);
@@ -633,6 +636,8 @@ int xe_pm_runtime_resume(struct xe_device *xe)
 	xe_rpm_lockmap_acquire(xe);
 
+	xe_soc_remapper_resume(xe);
+
 	for_each_gt(gt, xe, id)
 		xe_gt_idle_disable_c6(gt);
diff --git a/drivers/gpu/drm/xe/xe_soc_remapper.c b/drivers/gpu/drm/xe/xe_soc_remapper.c
index ed6b6c594e51..c425195f7152 100644
--- a/drivers/gpu/drm/xe/xe_soc_remapper.c
+++ b/drivers/gpu/drm/xe/xe_soc_remapper.c
@@ -13,9 +13,12 @@ static void xe_soc_remapper_set_region(struct xe_device *xe, struct xe_reg reg,
 				       u32 mask, u32 val)
 {
 	unsigned long flags;
+	u32 old;
 
 	spin_lock_irqsave(&xe->soc_remapper.lock, flags);
-	xe_mmio_rmw32(xe_root_tile_mmio(xe), reg, mask, val);
+	old = xe_mmio_rmw32(xe_root_tile_mmio(xe), reg, mask, val);
+	xe->soc_remapper.state = (old & ~mask) | val;
+	xe->soc_remapper.state_initialized = true;
 	spin_unlock_irqrestore(&xe->soc_remapper.lock, flags);
 }
 
@@ -31,6 +34,18 @@ void xe_soc_remapper_set_sysctrl_region(struct xe_device *xe, u32 index)
 			     REG_FIELD_PREP(SG_REMAP_SYSCTRL_MASK, index));
 }
 
+void xe_soc_remapper_resume(struct xe_device *xe)
+{
+	unsigned long flags;
+
+	if (!xe->soc_remapper.state_initialized)
+		return;
+
+	spin_lock_irqsave(&xe->soc_remapper.lock, flags);
+	xe_mmio_write32(xe_root_tile_mmio(xe), SG_REMAP_INDEX1, xe->soc_remapper.state);
+	spin_unlock_irqrestore(&xe->soc_remapper.lock, flags);
+}
+
 int xe_soc_remapper_init(struct xe_device *xe)
 {
 	spin_lock_init(&xe->soc_remapper.lock);
diff --git a/drivers/gpu/drm/xe/xe_soc_remapper.h b/drivers/gpu/drm/xe/xe_soc_remapper.h
index 289aa41c3408..507701c74f6f 100644
--- a/drivers/gpu/drm/xe/xe_soc_remapper.h
+++ b/drivers/gpu/drm/xe/xe_soc_remapper.h
@@ -13,5 +13,6 @@
 int xe_soc_remapper_init(struct xe_device *xe);
 void xe_soc_remapper_set_telem_region(struct xe_device *xe, u32 index);
 void xe_soc_remapper_set_sysctrl_region(struct xe_device *xe, u32 index);
+void xe_soc_remapper_resume(struct xe_device *xe);
 
 #endif
--
2.43.0
Thread overview: 14+ messages
2025-11-17 20:53 [PATCH v2 0/4] Add SoC remapper support for system controller Umesh Nerlige Ramappa
2025-11-17 20:53 ` [PATCH v2 1/4] drm/xe/soc_remapper: Initialize SoC remapper during Xe probe Umesh Nerlige Ramappa
2025-11-25 19:14 ` Nilawar, Badal
2025-11-17 20:53 ` [PATCH v2 2/4] drm/xe/soc_remapper: Use SoC remapper helper from VSEC code Umesh Nerlige Ramappa
2025-12-02 5:03 ` Nilawar, Badal
2025-12-02 20:35 ` Umesh Nerlige Ramappa
2025-11-17 20:53 ` [PATCH v2 3/4] drm/xe/soc_remapper: Add system controller config for SoC remapper Umesh Nerlige Ramappa
2025-11-17 20:53 ` Umesh Nerlige Ramappa [this message]
2025-11-26 14:46 ` [PATCH v2 4/4] drm/xe/remapper: Reprogram remapper index on PM resume events Nilawar, Badal
2025-12-02 21:00 ` Umesh Nerlige Ramappa
2025-11-17 20:59 ` ✗ CI.checkpatch: warning for Add SoC remapper support for system controller (rev2) Patchwork
2025-11-17 21:00 ` ✓ CI.KUnit: success " Patchwork
2025-11-17 21:58 ` ✓ Xe.CI.BAT: " Patchwork
2025-11-17 23:38 ` ✓ Xe.CI.Full: " Patchwork