From: Jonathan Cavitt <jonathan.cavitt@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: saurabhg.gupta@intel.com, alex.zuo@intel.com,
jonathan.cavitt@intel.com, joonas.lahtinen@linux.intel.com,
matthew.brost@intel.com, jianxun.zhang@intel.com,
shuicheng.lin@intel.com, dri-devel@lists.freedesktop.org,
Michal.Wajdeczko@intel.com, michal.mrozek@intel.com,
raag.jadav@intel.com
Subject: [PATCH v12 1/5] drm/xe/xe_gt_pagefault: Disallow writes to read-only VMAs
Date: Mon, 24 Mar 2025 23:09:24 +0000
Message-ID: <20250324230931.63840-2-jonathan.cavitt@intel.com>
In-Reply-To: <20250324230931.63840-1-jonathan.cavitt@intel.com>
The page fault handler should reject write and atomic access faults on
read-only VMAs. Add a check to handle_pagefault() after the VMA lookup
that fails such faults with -EPERM.
Fixes: 3d420e9fa848 ("drm/xe: Rework GPU page fault handling")
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Suggested-by: Matthew Brost <matthew.brost@intel.com>
---
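For reviewers reading without the tree handy, the new check reduces to a
small predicate. Below is a minimal standalone sketch, for illustration
only: pf_check_access() is a hypothetical helper (not a function in xe),
and the enum values other than ACCESS_TYPE_READ, which the hunk itself
tests, are assumed stand-ins for the driver's remaining fault access
types.

/*
 * Hedged sketch of the rejection rule added by this patch; standalone,
 * not xe code. A write or atomic fault on a read-only VMA must fail
 * with -EPERM (a permission problem, not a bad address).
 */
#include <errno.h>
#include <stdbool.h>

enum access_type {
	ACCESS_TYPE_READ,	/* mirrors the value tested in the hunk */
	ACCESS_TYPE_WRITE,	/* assumed */
	ACCESS_TYPE_ATOMIC,	/* assumed */
};

static int pf_check_access(bool vma_read_only, enum access_type type)
{
	if (vma_read_only && type != ACCESS_TYPE_READ)
		return -EPERM;
	return 0;
}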
drivers/gpu/drm/xe/xe_gt_pagefault.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
index 9fa11e837dd1..3240890aac07 100644
--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
@@ -237,6 +237,11 @@ static int handle_pagefault(struct xe_gt *gt, struct pagefault *pf)
 		goto unlock_vm;
 	}
 
+	if (xe_vma_read_only(vma) && pf->access_type != ACCESS_TYPE_READ) {
+		err = -EPERM;
+		goto unlock_vm;
+	}
+
 	atomic = access_is_atomic(pf->access_type);
 
 	if (xe_vma_is_cpu_addr_mirror(vma))
--
2.43.0