Intel-XE Archive on lore.kernel.org
* [PATCH] drm/xe: Add bounds check for num_binds to prevent memory exhaustion
@ 2026-05-06 18:06 Ramesh Adhikari
  2026-05-06 19:28 ` Matthew Brost
  2026-05-07 12:55 ` ✗ LGCI.VerificationFailed: failure for drm/xe: Add bounds check for num_binds to prevent memory exhaustion (rev2) Patchwork
  0 siblings, 2 replies; 5+ messages in thread
From: Ramesh Adhikari @ 2026-05-06 18:06 UTC (permalink / raw)
  To: intel-xe
  Cc: matthew.brost, thomas.hellstrom, rodrigo.vivi, stable,
	Ramesh Adhikari

The xe_vm_bind_ioctl function accepts user-controlled num_binds without
bounds checking, allowing arbitrarily large memory allocations. This
follows the same vulnerability pattern that was fixed for num_syncs in
commit 8e461304009d ("drm/xe: Limit num_syncs to prevent huge allocations").

Add DRM_XE_MAX_BINDS (1024) limit and validate num_binds before allocation,
matching the num_syncs fix pattern.

Similar unbounded allocations exist for num_mem_ranges and OA n_regs,
which should be addressed in follow-up patches.

Cc: stable@vger.kernel.org
Signed-off-by: Ramesh <adhikari.resume@gmail.com>
---
 drivers/gpu/drm/xe/xe_vm.c | 5 +++++
 include/uapi/drm/xe_drm.h  | 1 +
 2 files changed, 6 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index a717a2b8dea..1ff66874f43 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -3841,6 +3841,11 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 		return -EINVAL;
 
 	err = vm_bind_ioctl_check_args(xe, vm, args, &bind_ops);
+
+	if (XE_IOCTL_DBG(xe, args->num_binds > DRM_XE_MAX_BINDS)) {
+		err = -EINVAL;
+		goto put_vm;
+	}
 	if (err)
 		goto put_vm;
 
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index ae2fda23ce7..804ccb23b11 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -1606,6 +1606,7 @@ struct drm_xe_exec {
 	__u32 exec_queue_id;
 
 #define DRM_XE_MAX_SYNCS 1024
+#define DRM_XE_MAX_BINDS 1024
 	/** @num_syncs: Amount of struct drm_xe_sync in array. */
 	__u32 num_syncs;
 
-- 
2.43.0



* Re: [PATCH] drm/xe: Add bounds check for num_binds to prevent memory exhaustion
  2026-05-06 18:06 [PATCH] drm/xe: Add bounds check for num_binds to prevent memory exhaustion Ramesh Adhikari
@ 2026-05-06 19:28 ` Matthew Brost
  2026-05-07  6:31   ` Thomas Hellström
  2026-05-07 12:55 ` ✗ LGCI.VerificationFailed: failure for drm/xe: Add bounds check for num_binds to prevent memory exhaustion (rev2) Patchwork
  1 sibling, 1 reply; 5+ messages in thread
From: Matthew Brost @ 2026-05-06 19:28 UTC (permalink / raw)
  To: Ramesh Adhikari; +Cc: intel-xe, thomas.hellstrom, rodrigo.vivi, stable

On Wed, May 06, 2026 at 11:36:36PM +0530, Ramesh Adhikari wrote:
> The xe_vm_bind_ioctl function accepts user-controlled num_binds without
> bounds checking, allowing arbitrarily large memory allocations. This
> follows the same vulnerability pattern that was fixed for num_syncs in
> commit 8e461304009d ("drm/xe: Limit num_syncs to prevent huge allocations").
> 

The difference here is we issue a kvmalloc (2G) vs a kmalloc (4M) in the
sync case. So it's still possible a user triggers a kvmalloc over 2G...
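
For reference, a minimal sketch of the allocation path in question; the
call shape follows the xe driver's style but is reconstructed here, not
copied verbatim from the tree:

	/* The bind_ops array scales with the user-controlled num_binds;
	 * with no upper bound the request can grow until it hits
	 * kvmalloc's INT_MAX (~2G) cap and only fails there.
	 */
	*bind_ops = kvmalloc_array(args->num_binds, sizeof(**bind_ops),
				   GFP_KERNEL | __GFP_ACCOUNT);
	if (!*bind_ops)
		return -ENOMEM;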

> Add DRM_XE_MAX_BINDS (1024) limit and validate num_binds before allocation,
> matching the num_syncs fix pattern.
> 
> Similar unbounded allocations exist for num_mem_ranges and OA n_regs,
> which should be addressed in follow-up patches.
> 
> Cc: stable@vger.kernel.org
> Signed-off-by: Ramesh <adhikari.resume@gmail.com>
> ---
>  drivers/gpu/drm/xe/xe_vm.c | 5 +++++
>  include/uapi/drm/xe_drm.h  | 1 +
>  2 files changed, 6 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index a717a2b8dea..1ff66874f43 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -3841,6 +3841,11 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>  		return -EINVAL;
>  
>  	err = vm_bind_ioctl_check_args(xe, vm, args, &bind_ops);
> +
> +	if (XE_IOCTL_DBG(xe, args->num_binds > DRM_XE_MAX_BINDS)) {
> +		err = -EINVAL;
> +		goto put_vm;
> +	}

We had something like this in early Xe, IIRC; the max was 512, but we
found that for Vk / Mesa they will pass a huge number of binds in an
array. So 1k likely isn't enough, and this patch would be considered a
uAPI regression, so as is this is a no-go. Maybe we can figure out some
reasonable upper bound (64k, 128k), idk.

Matt

>  	if (err)
>  		goto put_vm;
>  
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index ae2fda23ce7..804ccb23b11 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -1606,6 +1606,7 @@ struct drm_xe_exec {
>  	__u32 exec_queue_id;
>  
>  #define DRM_XE_MAX_SYNCS 1024
> +#define DRM_XE_MAX_BINDS 1024
>  	/** @num_syncs: Amount of struct drm_xe_sync in array. */
>  	__u32 num_syncs;
>  
> -- 
> 2.43.0
> 


* Re: [PATCH] drm/xe: Add bounds check for num_binds to prevent memory exhaustion
  2026-05-06 19:28 ` Matthew Brost
@ 2026-05-07  6:31   ` Thomas Hellström
  2026-05-07  6:50     ` Matthew Brost
  0 siblings, 1 reply; 5+ messages in thread
From: Thomas Hellström @ 2026-05-07  6:31 UTC (permalink / raw)
  To: Matthew Brost, Ramesh Adhikari; +Cc: intel-xe, rodrigo.vivi, stable

On Wed, 2026-05-06 at 12:28 -0700, Matthew Brost wrote:
> On Wed, May 06, 2026 at 11:36:36PM +0530, Ramesh Adhikari wrote:
> > The xe_vm_bind_ioctl function accepts user-controlled num_binds
> > without bounds checking, allowing arbitrarily large memory
> > allocations. This follows the same vulnerability pattern that was
> > fixed for num_syncs in commit 8e461304009d ("drm/xe: Limit
> > num_syncs to prevent huge allocations").
> > 
> 
> The difference here is we issue a kvmalloc (2G) vs a kmalloc (4M) in
> the sync case. So it's still possible a user triggers a kvmalloc over
> 2G...
> 
> > Add DRM_XE_MAX_BINDS (1024) limit and validate num_binds before
> > allocation, matching the num_syncs fix pattern.
> > 
> > Similar unbounded allocations exist for num_mem_ranges and OA
> > n_regs, which should be addressed in follow-up patches.
> > 
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Ramesh <adhikari.resume@gmail.com>
> > ---
> >  drivers/gpu/drm/xe/xe_vm.c | 5 +++++
> >  include/uapi/drm/xe_drm.h  | 1 +
> >  2 files changed, 6 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > index a717a2b8dea..1ff66874f43 100644
> > --- a/drivers/gpu/drm/xe/xe_vm.c
> > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > @@ -3841,6 +3841,11 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> >  		return -EINVAL;
> >  
> >  	err = vm_bind_ioctl_check_args(xe, vm, args, &bind_ops);
> > +
> > +	if (XE_IOCTL_DBG(xe, args->num_binds > DRM_XE_MAX_BINDS)) {
> > +		err = -EINVAL;
> > +		goto put_vm;
> > +	}
> 
> We had something like this in early Xe, IIRC; the max was 512, but we
> found that for Vk / Mesa they will pass a huge number of binds in an
> array. So 1k likely isn't enough, and this patch would be considered a
> uAPI regression, so as is this is a no-go. Maybe we can figure out
> some reasonable upper bound (64k, 128k), idk.

IIRC we debated this back and forth. The challenging argument was that
if we consume all memory we'd get an error back, which is sort of true,
but then we should've really made sure that all memory allocated was
also accounted against the cgroup, with __GFP_ACCOUNT. We only did that
for one large allocation.

But I think we made sure to avoid future regressions (functional, not
performance) by requiring UMD to handle -ENOBUFS, meaning "split the
array bind and retry". So whatever limit we come up with, we should not
return -EINVAL but -ENOBUFS.
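
As a minimal userspace-side sketch of that contract: the xe_vm_bind()
wrapper below is hypothetical, standing in for however a UMD issues
DRM_IOCTL_XE_VM_BIND:

	/* On -ENOBUFS, split the op array in half and retry each part,
	 * degrading to single binds in the worst case.
	 */
	static int bind_array(int fd, struct drm_xe_vm_bind *args,
			      struct drm_xe_vm_bind_op *ops, __u32 n)
	{
		int err = xe_vm_bind(fd, args, ops, n);

		if (err != -ENOBUFS || n <= 1)
			return err;

		err = bind_array(fd, args, ops, n / 2);
		if (err)
			return err;
		return bind_array(fd, args, ops + n / 2, n - n / 2);
	}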

Thanks,
Thomas



> 
> Matt
> 
> >  	if (err)
> >  		goto put_vm;
> >  
> > diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> > index ae2fda23ce7..804ccb23b11 100644
> > --- a/include/uapi/drm/xe_drm.h
> > +++ b/include/uapi/drm/xe_drm.h
> > @@ -1606,6 +1606,7 @@ struct drm_xe_exec {
> >  	__u32 exec_queue_id;
> >  
> >  #define DRM_XE_MAX_SYNCS 1024
> > +#define DRM_XE_MAX_BINDS 1024
> >  	/** @num_syncs: Amount of struct drm_xe_sync in array. */
> >  	__u32 num_syncs;
> >  
> > -- 
> > 2.43.0
> > 


* Re: [PATCH] drm/xe: Add bounds check for num_binds to prevent memory exhaustion
  2026-05-07  6:31   ` Thomas Hellström
@ 2026-05-07  6:50     ` Matthew Brost
  0 siblings, 0 replies; 5+ messages in thread
From: Matthew Brost @ 2026-05-07  6:50 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: Ramesh Adhikari, intel-xe, rodrigo.vivi, stable

On Thu, May 07, 2026 at 08:31:40AM +0200, Thomas Hellström wrote:
> On Wed, 2026-05-06 at 12:28 -0700, Matthew Brost wrote:
> > On Wed, May 06, 2026 at 11:36:36PM +0530, Ramesh Adhikari wrote:
> > > The xe_vm_bind_ioctl function accepts user-controlled num_binds
> > > without bounds checking, allowing arbitrarily large memory
> > > allocations. This follows the same vulnerability pattern that was
> > > fixed for num_syncs in commit 8e461304009d ("drm/xe: Limit
> > > num_syncs to prevent huge allocations").
> > > 
> > 
> > The difference here is we issue a kvmalloc (2G) vs a kmalloc (4M)
> > in the sync case. So it's still possible a user triggers a kvmalloc
> > over 2G...
> > 
> > > Add DRM_XE_MAX_BINDS (1024) limit and validate num_binds before
> > > allocation, matching the num_syncs fix pattern.
> > > 
> > > Similar unbounded allocations exist for num_mem_ranges and OA
> > > n_regs, which should be addressed in follow-up patches.
> > > 
> > > Cc: stable@vger.kernel.org
> > > Signed-off-by: Ramesh <adhikari.resume@gmail.com>
> > > ---
> > >  drivers/gpu/drm/xe/xe_vm.c | 5 +++++
> > >  include/uapi/drm/xe_drm.h  | 1 +
> > >  2 files changed, 6 insertions(+)
> > > 
> > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > index a717a2b8dea..1ff66874f43 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > @@ -3841,6 +3841,11 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> > >  		return -EINVAL;
> > >  
> > >  	err = vm_bind_ioctl_check_args(xe, vm, args, &bind_ops);
> > > +
> > > +	if (XE_IOCTL_DBG(xe, args->num_binds > DRM_XE_MAX_BINDS)) {
> > > +		err = -EINVAL;
> > > +		goto put_vm;
> > > +	}
> > 
> > We had something like this in early Xe, IIRC; the max was 512, but
> > we found that for Vk / Mesa they will pass a huge number of binds in
> > an array. So 1k likely isn't enough, and this patch would be
> > considered a uAPI regression, so as is this is a no-go. Maybe we can
> > figure out some reasonable upper bound (64k, 128k), idk.
> 
> IIRC we debated this back and forth. The challenging argument was that
> if we consume all memory we'd get an error back, which is sort of true,
> but then we should've really made sure that all memory allocated was
> also accounted against the cgroup, with __GFP_ACCOUNT. We only did that
> for one large allocation.
> 
> But I think we made sure to avoid future regressions (functional, not
> performance) by requiring UMD to handle -ENOBUFS, meaning "split the
> array bind and retry". So whatever limit we come up with, we should not
> return -EINVAL but -ENOBUFS.

Yes, we can currently hit -ENOBUFS on a large array of binds when we run
out of space for instructions in the batch buffers programming the bind,
and Mesa gracefully handles this by breaking the array down into
individual binds.
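
Taken together, a sketch of how the check could look under that
convention; the limit value is still a placeholder, not a settled
number:

	/* Past the agreed bound, ask UMD to split the array and retry
	 * rather than rejecting the args outright.
	 */
	if (XE_IOCTL_DBG(xe, args->num_binds > DRM_XE_MAX_BINDS)) {
		err = -ENOBUFS;
		goto put_vm;
	}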

Matt 

> 
> Thanks,
> Thomas
> 
> 
> 
> > 
> > Matt
> > 
> > >  	if (err)
> > >  		goto put_vm;
> > >  
> > > diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> > > index ae2fda23ce7..804ccb23b11 100644
> > > --- a/include/uapi/drm/xe_drm.h
> > > +++ b/include/uapi/drm/xe_drm.h
> > > @@ -1606,6 +1606,7 @@ struct drm_xe_exec {
> > >  	__u32 exec_queue_id;
> > >  
> > >  #define DRM_XE_MAX_SYNCS 1024
> > > +#define DRM_XE_MAX_BINDS 1024
> > >  	/** @num_syncs: Amount of struct drm_xe_sync in array. */
> > >  	__u32 num_syncs;
> > >  
> > > -- 
> > > 2.43.0
> > > 


* ✗ LGCI.VerificationFailed: failure for drm/xe: Add bounds check for num_binds to prevent memory exhaustion (rev2)
  2026-05-06 18:06 [PATCH] drm/xe: Add bounds check for num_binds to prevent memory exhaustion Ramesh Adhikari
  2026-05-06 19:28 ` Matthew Brost
@ 2026-05-07 12:55 ` Patchwork
  1 sibling, 0 replies; 5+ messages in thread
From: Patchwork @ 2026-05-07 12:55 UTC (permalink / raw)
  To: Ramesh Adhikari; +Cc: intel-xe

== Series Details ==

Series: drm/xe: Add bounds check for num_binds to prevent memory exhaustion (rev2)
URL   : https://patchwork.freedesktop.org/series/166134/
State : failure

== Summary ==

Address 'adhikari.resume@gmail.com' is not on the allowlist, which prevents CI from being triggered for this patch.
If you want Intel GFX CI to accept this address, please contact the script maintainers at i915-ci-infra@lists.freedesktop.org.
Exception occurred during validation, bailing out!


