From: Jonathan Cavitt <jonathan.cavitt@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: saurabhg.gupta@intel.com, alex.zuo@intel.com,
jonathan.cavitt@intel.com, joonas.lahtinen@linux.intel.com,
matthew.brost@intel.com, jianxun.zhang@intel.com,
shuicheng.lin@intel.com, dri-devel@lists.freedesktop.org,
Michal.Wajdeczko@intel.com, michal.mrozek@intel.com,
raag.jadav@intel.com, ivan.briano@intel.com,
matthew.auld@intel.com, dafna.hirschfeld@intel.com
Subject: [PATCH v33 2/5] drm/xe/xe_pagefault: Track address precision per pagefault
Date: Fri, 6 Feb 2026 16:47:34 +0000
Message-ID: <20260206164731.8395-9-jonathan.cavitt@intel.com>
In-Reply-To: <20260206164731.8395-7-jonathan.cavitt@intel.com>
Add an address precision field to the pagefault consumer. This captures
the fact that GuC reports pagefaults at SZ_4K granularity, meaning the
reported pagefault address is only the base address of the page in
which the faulting access occurred, not the exact address of the fault.
The field will be needed if additional fault reporters with different
granularities are added later.
v2:
- Keep u8 values together (Matt Brost)
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
---
drivers/gpu/drm/xe/xe_guc_pagefault.c | 1 +
drivers/gpu/drm/xe/xe_pagefault.c | 2 ++
drivers/gpu/drm/xe/xe_pagefault_types.h | 8 +++++++-
3 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_guc_pagefault.c b/drivers/gpu/drm/xe/xe_guc_pagefault.c
index 719a18187a31..79b790fedda8 100644
--- a/drivers/gpu/drm/xe/xe_guc_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_guc_pagefault.c
@@ -74,6 +74,7 @@ int xe_guc_pagefault_handler(struct xe_guc *guc, u32 *msg, u32 len)
<< PFD_VIRTUAL_ADDR_HI_SHIFT) |
(FIELD_GET(PFD_VIRTUAL_ADDR_LO, msg[2]) <<
PFD_VIRTUAL_ADDR_LO_SHIFT);
+ pf.consumer.addr_precision = 12;
pf.consumer.asid = FIELD_GET(PFD_ASID, msg[1]);
pf.consumer.access_type = FIELD_GET(PFD_ACCESS_TYPE, msg[2]);
pf.consumer.fault_type = FIELD_GET(PFD_FAULT_TYPE, msg[2]);
diff --git a/drivers/gpu/drm/xe/xe_pagefault.c b/drivers/gpu/drm/xe/xe_pagefault.c
index 922a4f3344b1..a24de27eb303 100644
--- a/drivers/gpu/drm/xe/xe_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_pagefault.c
@@ -231,6 +231,7 @@ static void xe_pagefault_print(struct xe_pagefault *pf)
{
xe_gt_info(pf->gt, "\n\tASID: %d\n"
"\tFaulted Address: 0x%08x%08x\n"
+ "\tAddress Precision: %lu\n"
"\tFaultType: %d\n"
"\tAccessType: %d\n"
"\tFaultLevel: %d\n"
@@ -239,6 +240,7 @@ static void xe_pagefault_print(struct xe_pagefault *pf)
pf->consumer.asid,
upper_32_bits(pf->consumer.page_addr),
lower_32_bits(pf->consumer.page_addr),
+ BIT(pf->consumer.addr_precision),
pf->consumer.fault_type,
pf->consumer.access_type,
pf->consumer.fault_level,
diff --git a/drivers/gpu/drm/xe/xe_pagefault_types.h b/drivers/gpu/drm/xe/xe_pagefault_types.h
index d3b516407d60..333db12713ef 100644
--- a/drivers/gpu/drm/xe/xe_pagefault_types.h
+++ b/drivers/gpu/drm/xe/xe_pagefault_types.h
@@ -67,6 +67,12 @@ struct xe_pagefault {
u64 page_addr;
/** @consumer.asid: address space ID */
u32 asid;
+ /**
+	 * @consumer.addr_precision: precision of the page fault address;
+	 * u8 rather than u32 to keep compact. The actual precision is
+	 * BIT(consumer.addr_precision). Currently only 12 (SZ_4K).
+ */
+ u8 addr_precision;
/**
* @consumer.access_type: access type, u8 rather than enum to
* keep size compact
@@ -85,7 +91,7 @@ struct xe_pagefault {
/** @consumer.engine_instance: engine instance */
u8 engine_instance;
/** consumer.reserved: reserved bits for future expansion */
- u8 reserved[7];
+ u8 reserved[6];
} consumer;
/**
* @producer: State for the producer (i.e., HW/FW interface). Populated
--
2.43.0