* [PATCH] drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds
@ 2024-07-19 17:23 Matthew Brost
2024-07-19 17:27 ` ✓ CI.Patch_applied: success for " Patchwork
` (4 more replies)
0 siblings, 5 replies; 9+ messages in thread
From: Matthew Brost @ 2024-07-19 17:23 UTC (permalink / raw)
To: intel-xe; +Cc: paulo.r.zanoni
The size of an array of binds is directly tied to the size of several
kmalloc allocations in the KMD, making those allocations more likely to
fail. Return -ENOBUFS when one of these allocations fails.
The expected UMD behavior upon receiving -ENOBUFS is to split the array
of binds into a series of single binds.
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 3fde2c8292ad..b715883f40d8 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -718,7 +718,7 @@ int xe_vm_userptr_check_repin(struct xe_vm *vm)
list_empty_careful(&vm->userptr.invalidated)) ? 0 : -EAGAIN;
}
-static int xe_vma_ops_alloc(struct xe_vma_ops *vops)
+static int xe_vma_ops_alloc(struct xe_vma_ops *vops, bool array_of_binds)
{
int i;
@@ -731,7 +731,7 @@ static int xe_vma_ops_alloc(struct xe_vma_ops *vops)
sizeof(*vops->pt_update_ops[i].ops),
GFP_KERNEL);
if (!vops->pt_update_ops[i].ops)
- return -ENOMEM;
+ return array_of_binds ? -ENOBUFS : -ENOMEM;
}
return 0;
@@ -824,7 +824,7 @@ int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker)
goto free_ops;
}
- err = xe_vma_ops_alloc(&vops);
+ err = xe_vma_ops_alloc(&vops, false);
if (err)
goto free_ops;
@@ -871,7 +871,7 @@ struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma, u8 tile_ma
if (err)
return ERR_PTR(err);
- err = xe_vma_ops_alloc(&vops);
+ err = xe_vma_ops_alloc(&vops, false);
if (err) {
fence = ERR_PTR(err);
goto free_ops;
@@ -2765,7 +2765,7 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe,
sizeof(struct drm_xe_vm_bind_op),
GFP_KERNEL | __GFP_ACCOUNT);
if (!*bind_ops)
- return -ENOMEM;
+ return args->num_binds > 1 ? -ENOBUFS : -ENOMEM;
err = __copy_from_user(*bind_ops, bind_user,
sizeof(struct drm_xe_vm_bind_op) *
@@ -3104,7 +3104,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
goto unwind_ops;
}
- err = xe_vma_ops_alloc(&vops);
+ err = xe_vma_ops_alloc(&vops, args->num_binds > 1);
if (err)
goto unwind_ops;
--
2.34.1
^ permalink raw reply related [flat|nested] 9+ messages in thread
* ✓ CI.Patch_applied: success for drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds
2024-07-19 17:23 [PATCH] drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds Matthew Brost
@ 2024-07-19 17:27 ` Patchwork
2024-07-19 17:28 ` ✓ CI.checkpatch: " Patchwork
` (3 subsequent siblings)
4 siblings, 0 replies; 9+ messages in thread
From: Patchwork @ 2024-07-19 17:27 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
== Series Details ==
Series: drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds
URL : https://patchwork.freedesktop.org/series/136292/
State : success
== Summary ==
=== Applying kernel patches on branch 'drm-tip' with base: ===
Base commit: eb6045a759ea drm-tip: 2024y-07m-19d-11h-07m-10s UTC integration manifest
=== git am output follows ===
Applying: drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds
* ✓ CI.checkpatch: success for drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds
2024-07-19 17:23 [PATCH] drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds Matthew Brost
2024-07-19 17:27 ` ✓ CI.Patch_applied: success for " Patchwork
@ 2024-07-19 17:28 ` Patchwork
2024-07-19 17:28 ` ✗ CI.KUnit: failure " Patchwork
` (2 subsequent siblings)
4 siblings, 0 replies; 9+ messages in thread
From: Patchwork @ 2024-07-19 17:28 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
== Series Details ==
Series: drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds
URL : https://patchwork.freedesktop.org/series/136292/
State : success
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
5ce3e132caaa5b45e5e50201b574a097d130967c
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 5dfe5acf875a1c4e0aef515ec8a8d474c6788150
Author: Matthew Brost <matthew.brost@intel.com>
Date: Fri Jul 19 10:23:34 2024 -0700
drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds
The size of an array of binds is directly tied to several kmalloc in the
KMD, thus making these kmalloc more likely to fail. Return -ENOBUFS in
the case of these failures.
The expected UMD behavior upon returning -ENOBUFS is to split an array
of binds into a series of single binds.
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
+ /mt/dim checkpatch eb6045a759ea13e8d159bdaea423e904b9e3717b drm-intel
5dfe5acf875a drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds
* ✗ CI.KUnit: failure for drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds
2024-07-19 17:23 [PATCH] drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds Matthew Brost
2024-07-19 17:27 ` ✓ CI.Patch_applied: success for " Patchwork
2024-07-19 17:28 ` ✓ CI.checkpatch: " Patchwork
@ 2024-07-19 17:28 ` Patchwork
2024-07-19 18:27 ` [PATCH] " Cavitt, Jonathan
2024-07-20 19:04 ` Ghimiray, Himal Prasad
4 siblings, 0 replies; 9+ messages in thread
From: Patchwork @ 2024-07-19 17:28 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
== Series Details ==
Series: drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds
URL : https://patchwork.freedesktop.org/series/136292/
State : failure
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
ERROR:root:In file included from ../drivers/gpu/drm/drm_atomic.c:46:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_atomic_uapi.c:43:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_blend.c:36:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_bridge.c:38:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_eld.c:11:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_client.c:23:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_displayid.c:9:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_color_mgmt.c:32:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_client_modeset.c:26:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_connector.c:41:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_crtc.c:52:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_dumb_buffers.c:31:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_drv.c:50:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_encoder.c:32:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_framebuffer.c:38:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_edid.c:49:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_file.c:48:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_ioctl.c:43:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_lease.c:15:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_mode_config.c:34:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_mode_object.c:33:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_modes.c:50:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_plane.c:36:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_property.c:33:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_sysfs.c:34:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_debugfs.c:45:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_atomic_helper.c:48:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_fb_helper.c:47:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
| ^~~~~~~~~~~~~~~~~~~~
../lib/iomap.c:156:5: warning: no previous prototype for ‘ioread64_lo_hi’ [-Wmissing-prototypes]
156 | u64 ioread64_lo_hi(const void __iomem *addr)
| ^~~~~~~~~~~~~~
../lib/iomap.c:163:5: warning: no previous prototype for ‘ioread64_hi_lo’ [-Wmissing-prototypes]
163 | u64 ioread64_hi_lo(const void __iomem *addr)
| ^~~~~~~~~~~~~~
../lib/iomap.c:170:5: warning: no previous prototype for ‘ioread64be_lo_hi’ [-Wmissing-prototypes]
170 | u64 ioread64be_lo_hi(const void __iomem *addr)
| ^~~~~~~~~~~~~~~~
../lib/iomap.c:178:5: warning: no previous prototype for ‘ioread64be_hi_lo’ [-Wmissing-prototypes]
178 | u64 ioread64be_hi_lo(const void __iomem *addr)
| ^~~~~~~~~~~~~~~~
../lib/iomap.c:264:6: warning: no previous prototype for ‘iowrite64_lo_hi’ [-Wmissing-prototypes]
264 | void iowrite64_lo_hi(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~
../lib/iomap.c:272:6: warning: no previous prototype for ‘iowrite64_hi_lo’ [-Wmissing-prototypes]
272 | void iowrite64_hi_lo(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~
../lib/iomap.c:280:6: warning: no previous prototype for ‘iowrite64be_lo_hi’ [-Wmissing-prototypes]
280 | void iowrite64be_lo_hi(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~~~
../lib/iomap.c:288:6: warning: no previous prototype for ‘iowrite64be_hi_lo’ [-Wmissing-prototypes]
288 | void iowrite64be_hi_lo(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~~~
ld: drivers/gpu/drm/drm_atomic_uapi.o: in function `drm_panic_is_enabled':
drm_atomic_uapi.c:(.text+0x1120): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_blend.o: in function `drm_panic_is_enabled':
drm_blend.c:(.text+0x890): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_bridge.o: in function `drm_panic_is_enabled':
drm_bridge.c:(.text+0x1270): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_client.o: in function `drm_panic_is_enabled':
drm_client.c:(.text+0xbd0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_client_modeset.o: in function `drm_panic_is_enabled':
drm_client_modeset.c:(.text+0x2bb0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_color_mgmt.o: in function `drm_panic_is_enabled':
drm_color_mgmt.c:(.text+0x520): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_connector.o: in function `drm_panic_is_enabled':
drm_connector.c:(.text+0x2ae0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_crtc.o: in function `drm_panic_is_enabled':
drm_crtc.c:(.text+0xd50): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_displayid.o: in function `drm_panic_is_enabled':
drm_displayid.c:(.text+0x0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_drv.o: in function `drm_panic_is_enabled':
drm_drv.c:(.text+0x1500): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_dumb_buffers.o: in function `drm_panic_is_enabled':
drm_dumb_buffers.c:(.text+0x0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_edid.o: in function `drm_panic_is_enabled':
drm_edid.c:(.text+0x7fb0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_eld.o: in function `drm_panic_is_enabled':
drm_eld.c:(.text+0xa0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_encoder.o: in function `drm_panic_is_enabled':
drm_encoder.c:(.text+0x620): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_file.o: in function `drm_panic_is_enabled':
drm_file.c:(.text+0xc20): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_framebuffer.o: in function `drm_panic_is_enabled':
drm_framebuffer.c:(.text+0xc30): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_ioctl.o: in function `drm_panic_is_enabled':
drm_ioctl.c:(.text+0x1080): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_lease.o: in function `drm_panic_is_enabled':
drm_lease.c:(.text+0x210): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_mode_config.o: in function `drm_panic_is_enabled':
drm_mode_config.c:(.text+0xfe0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_mode_object.o: in function `drm_panic_is_enabled':
drm_mode_object.c:(.text+0x8b0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_modes.o: in function `drm_panic_is_enabled':
drm_modes.c:(.text+0x2e10): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_plane.o: in function `drm_panic_is_enabled':
drm_plane.c:(.text+0x20a0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_property.o: in function `drm_panic_is_enabled':
drm_property.c:(.text+0xe10): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_sysfs.o: in function `drm_panic_is_enabled':
drm_sysfs.c:(.text+0x8b0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_debugfs.o: in function `drm_panic_is_enabled':
drm_debugfs.c:(.text+0x1370): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_atomic_helper.o: in function `drm_panic_is_enabled':
drm_atomic_helper.c:(.text+0x69c0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_fb_helper.o: in function `drm_panic_is_enabled':
drm_fb_helper.c:(.text+0x2ea0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
make[3]: *** [../scripts/Makefile.vmlinux_o:62: vmlinux.o] Error 1
make[2]: *** [/kernel/Makefile:1152: vmlinux_o] Error 2
make[1]: *** [/kernel/Makefile:240: __sub-make] Error 2
make: *** [Makefile:240: __sub-make] Error 2
[17:28:01] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[17:28:05] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make ARCH=um O=.kunit --jobs=48
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
* RE: [PATCH] drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds
2024-07-19 17:23 [PATCH] drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds Matthew Brost
` (2 preceding siblings ...)
2024-07-19 17:28 ` ✗ CI.KUnit: failure " Patchwork
@ 2024-07-19 18:27 ` Cavitt, Jonathan
2024-07-20 19:04 ` Ghimiray, Himal Prasad
4 siblings, 0 replies; 9+ messages in thread
From: Cavitt, Jonathan @ 2024-07-19 18:27 UTC (permalink / raw)
To: Brost, Matthew, intel-xe@lists.freedesktop.org
Cc: Zanoni, Paulo R, Cavitt, Jonathan
-----Original Message-----
From: Intel-xe <intel-xe-bounces@lists.freedesktop.org> On Behalf Of Matthew Brost
Sent: Friday, July 19, 2024 10:24 AM
To: intel-xe@lists.freedesktop.org
Cc: Zanoni, Paulo R <paulo.r.zanoni@intel.com>
Subject: [PATCH] drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds
>
> The size of an array of binds is directly tied to several kmalloc in the
> KMD, thus making these kmalloc more likely to fail. Return -ENOBUFS in
> the case of these failures.
>
> The expected UMD behavior upon returning -ENOBUFS is to split an array
> of binds into a series of single binds.
>
> Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
LGTM.
Reviewed-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
-Jonathan Cavitt
> ---
> drivers/gpu/drm/xe/xe_vm.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 3fde2c8292ad..b715883f40d8 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -718,7 +718,7 @@ int xe_vm_userptr_check_repin(struct xe_vm *vm)
> list_empty_careful(&vm->userptr.invalidated)) ? 0 : -EAGAIN;
> }
>
> -static int xe_vma_ops_alloc(struct xe_vma_ops *vops)
> +static int xe_vma_ops_alloc(struct xe_vma_ops *vops, bool array_of_binds)
> {
> int i;
>
> @@ -731,7 +731,7 @@ static int xe_vma_ops_alloc(struct xe_vma_ops *vops)
> sizeof(*vops->pt_update_ops[i].ops),
> GFP_KERNEL);
> if (!vops->pt_update_ops[i].ops)
> - return -ENOMEM;
> + return array_of_binds ? -ENOBUFS : -ENOMEM;
> }
>
> return 0;
> @@ -824,7 +824,7 @@ int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker)
> goto free_ops;
> }
>
> - err = xe_vma_ops_alloc(&vops);
> + err = xe_vma_ops_alloc(&vops, false);
> if (err)
> goto free_ops;
>
> @@ -871,7 +871,7 @@ struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma, u8 tile_ma
> if (err)
> return ERR_PTR(err);
>
> - err = xe_vma_ops_alloc(&vops);
> + err = xe_vma_ops_alloc(&vops, false);
> if (err) {
> fence = ERR_PTR(err);
> goto free_ops;
> @@ -2765,7 +2765,7 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe,
> sizeof(struct drm_xe_vm_bind_op),
> GFP_KERNEL | __GFP_ACCOUNT);
> if (!*bind_ops)
> - return -ENOMEM;
> + return args->num_binds > 1 ? -ENOBUFS : -ENOMEM;
>
> err = __copy_from_user(*bind_ops, bind_user,
> sizeof(struct drm_xe_vm_bind_op) *
> @@ -3104,7 +3104,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> goto unwind_ops;
> }
>
> - err = xe_vma_ops_alloc(&vops);
> + err = xe_vma_ops_alloc(&vops, args->num_binds > 1);
> if (err)
> goto unwind_ops;
>
> --
> 2.34.1
>
>
* Re: [PATCH] drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds
2024-07-19 17:23 [PATCH] drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds Matthew Brost
` (3 preceding siblings ...)
2024-07-19 18:27 ` [PATCH] " Cavitt, Jonathan
@ 2024-07-20 19:04 ` Ghimiray, Himal Prasad
2024-07-20 23:14 ` Matthew Brost
4 siblings, 1 reply; 9+ messages in thread
From: Ghimiray, Himal Prasad @ 2024-07-20 19:04 UTC (permalink / raw)
To: Matthew Brost, intel-xe; +Cc: paulo.r.zanoni
On 19-07-2024 22:53, Matthew Brost wrote:
> The size of an array of binds is directly tied to several kmalloc in the
> KMD, thus making these kmalloc more likely to fail. Return -ENOBUFS in
> the case of these failures.
>
> The expected UMD behavior upon returning -ENOBUFS is to split an array
> of binds into a series of single binds.
Would it be appropriate to add some documentation or guidelines, in the
form of drm_err or kernel doc, regarding the expected behavior from the
UMD when the ioctl returns a -ENOBUFS error?
>
> Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 3fde2c8292ad..b715883f40d8 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -718,7 +718,7 @@ int xe_vm_userptr_check_repin(struct xe_vm *vm)
> list_empty_careful(&vm->userptr.invalidated)) ? 0 : -EAGAIN;
> }
>
> -static int xe_vma_ops_alloc(struct xe_vma_ops *vops)
> +static int xe_vma_ops_alloc(struct xe_vma_ops *vops, bool array_of_binds)
> {
> int i;
>
> @@ -731,7 +731,7 @@ static int xe_vma_ops_alloc(struct xe_vma_ops *vops)
> sizeof(*vops->pt_update_ops[i].ops),
> GFP_KERNEL);
> if (!vops->pt_update_ops[i].ops)
> - return -ENOMEM;
> + return array_of_binds ? -ENOBUFS : -ENOMEM;
> }
>
> return 0;
> @@ -824,7 +824,7 @@ int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker)
> goto free_ops;
> }
>
> - err = xe_vma_ops_alloc(&vops);
> + err = xe_vma_ops_alloc(&vops, false);
> if (err)
> goto free_ops;
>
> @@ -871,7 +871,7 @@ struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma, u8 tile_ma
> if (err)
> return ERR_PTR(err);
>
> - err = xe_vma_ops_alloc(&vops);
> + err = xe_vma_ops_alloc(&vops, false);
> if (err) {
> fence = ERR_PTR(err);
> goto free_ops;
> @@ -2765,7 +2765,7 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe,
> sizeof(struct drm_xe_vm_bind_op),
> GFP_KERNEL | __GFP_ACCOUNT);
> if (!*bind_ops)
> - return -ENOMEM;
> + return args->num_binds > 1 ? -ENOBUFS : -ENOMEM;
>
> err = __copy_from_user(*bind_ops, bind_user,
> sizeof(struct drm_xe_vm_bind_op) *
> @@ -3104,7 +3104,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> goto unwind_ops;
> }
>
> - err = xe_vma_ops_alloc(&vops);
> + err = xe_vma_ops_alloc(&vops, args->num_binds > 1);
> if (err)
> goto unwind_ops;
>
* Re: [PATCH] drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds
2024-07-20 19:04 ` Ghimiray, Himal Prasad
@ 2024-07-20 23:14 ` Matthew Brost
2024-07-20 23:20 ` Matthew Brost
0 siblings, 1 reply; 9+ messages in thread
From: Matthew Brost @ 2024-07-20 23:14 UTC (permalink / raw)
To: Ghimiray, Himal Prasad; +Cc: intel-xe, paulo.r.zanoni
On Sun, Jul 21, 2024 at 12:34:27AM +0530, Ghimiray, Himal Prasad wrote:
>
>
> On 19-07-2024 22:53, Matthew Brost wrote:
> > The size of an array of binds is directly tied to several kmalloc in the
> > KMD, thus making these kmalloc more likely to fail. Return -ENOBUFS in
> > the case of these failures.
> >
> > The expected UMD behavior upon returning -ENOBUFS is to split an array
> > of binds into a series of single binds.
>
> Would it be appropriate to have some doc/guidelines in the form of drm_err
> or kernel doc regarding expected behavior from UMD if the ioctl returns a
> -ENOBUFS error ?
>
Yes, this is on the todo list as part of the error handling cleanup for
both the exec and bind IOCTLs. I think the kernel doc should go in
xe_drm.h with a list of the errnos returned and the expected UMD
actions. Eventually I'd like to get this in place for all IOCTLs. I was
going to work on getting the exec and bind IOCTLs fixed up in the next
couple of weeks (we have an internal doc of required changes) to have it
ready for when Thomas is back (2 more weeks).
I made this change because we already return -ENOBUFS at a different
failure point for an array of binds (the BB being too large, see
xe_migrate.c), so we might as well finish up this error code, as it is a
fairly simple change. Also, Mesa has an MR [1] in flight to handle
-ENOBUFS situations.
Matt
[1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30276
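As a purely hypothetical sketch of what such per-ioctl errno kernel doc in xe_drm.h could look like (none of this text exists there today, and the wording is invented for illustration):

```c
/**
 * DOC: DRM_IOCTL_XE_VM_BIND errors (hypothetical sketch)
 *
 * Illustrative example only; this text is not present in xe_drm.h.
 *
 * -ENOMEM:  Transient allocation failure. The UMD may retry the ioctl
 *           unchanged.
 * -ENOBUFS: An allocation whose size is tied to the array of binds
 *           failed. The UMD should split the array into a series of
 *           single-bind ioctls and resubmit.
 */
```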
> >
> > Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> > drivers/gpu/drm/xe/xe_vm.c | 12 ++++++------
> > 1 file changed, 6 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > index 3fde2c8292ad..b715883f40d8 100644
> > --- a/drivers/gpu/drm/xe/xe_vm.c
> > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > @@ -718,7 +718,7 @@ int xe_vm_userptr_check_repin(struct xe_vm *vm)
> > list_empty_careful(&vm->userptr.invalidated)) ? 0 : -EAGAIN;
> > }
> > -static int xe_vma_ops_alloc(struct xe_vma_ops *vops)
> > +static int xe_vma_ops_alloc(struct xe_vma_ops *vops, bool array_of_binds)
> > {
> > int i;
> > @@ -731,7 +731,7 @@ static int xe_vma_ops_alloc(struct xe_vma_ops *vops)
> > sizeof(*vops->pt_update_ops[i].ops),
> > GFP_KERNEL);
> > if (!vops->pt_update_ops[i].ops)
> > - return -ENOMEM;
> > + return array_of_binds ? -ENOBUFS : -ENOMEM;
> > }
> > return 0;
> > @@ -824,7 +824,7 @@ int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker)
> > goto free_ops;
> > }
> > - err = xe_vma_ops_alloc(&vops);
> > + err = xe_vma_ops_alloc(&vops, false);
> > if (err)
> > goto free_ops;
> > @@ -871,7 +871,7 @@ struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma, u8 tile_ma
> > if (err)
> > return ERR_PTR(err);
> > - err = xe_vma_ops_alloc(&vops);
> > + err = xe_vma_ops_alloc(&vops, false);
> > if (err) {
> > fence = ERR_PTR(err);
> > goto free_ops;
> > @@ -2765,7 +2765,7 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe,
> > sizeof(struct drm_xe_vm_bind_op),
> > GFP_KERNEL | __GFP_ACCOUNT);
> > if (!*bind_ops)
> > - return -ENOMEM;
> > + return args->num_binds > 1 ? -ENOBUFS : -ENOMEM;
> > err = __copy_from_user(*bind_ops, bind_user,
> > sizeof(struct drm_xe_vm_bind_op) *
> > @@ -3104,7 +3104,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> > goto unwind_ops;
> > }
> > - err = xe_vma_ops_alloc(&vops);
> > + err = xe_vma_ops_alloc(&vops, args->num_binds > 1);
> > if (err)
> > goto unwind_ops;
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH] drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds
2024-07-20 23:14 ` Matthew Brost
@ 2024-07-20 23:20 ` Matthew Brost
2024-07-22 3:53 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 9+ messages in thread
From: Matthew Brost @ 2024-07-20 23:20 UTC (permalink / raw)
To: Ghimiray, Himal Prasad; +Cc: intel-xe, paulo.r.zanoni
On Sat, Jul 20, 2024 at 11:14:36PM +0000, Matthew Brost wrote:
> On Sun, Jul 21, 2024 at 12:34:27AM +0530, Ghimiray, Himal Prasad wrote:
> >
> >
> > On 19-07-2024 22:53, Matthew Brost wrote:
> > > The size of an array of binds is directly tied to several kmalloc in the
> > > KMD, thus making these kmalloc more likely to fail. Return -ENOBUFS in
> > > the case of these failures.
> > >
> > > The expected UMD behavior upon returning -ENOBUFS is to split an array
> > > of binds into a series of single binds.
> >
> > Would it be appropriate to have some doc/guidelines in the form of drm_err
> > or kernel doc regarding expected behavior from UMD if the ioctl returns a
> > -ENOBUFS error?
> >
>
> Yes, this is on the todo list as part of the error handling cleanup for both
> exec and bind IOCTLs. I think kernel doc should go in xe_drm.h with a
> list of errno returned and expected UMD actions. Eventually I'd like to
> get this in place for all IOCTLs. I was going to work on getting exec
> and bind IOCTLs fixed up in the next couple of weeks (we have an
> internal doc of required changes) to have it ready for when Thomas is
> back (2 more weeks).
>
> I made this change as we already have -ENOBUFS implemented at a
> different failure point for an array of binds (the BB being too large, see
> xe_migrate.c), so we might as well finish up this error code to make it
Also see xe_sa.c, search for -ENOBUFS.
> complete, as it is a fairly simple change. Also, Mesa has an MR [1] in
> flight to handle -ENOBUFS situations.
>
> Matt
>
> [1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30276
>
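The xe_drm.h kernel-doc that Matt mentions could look roughly like the sketch below. This is purely hypothetical: the DOC block and wording are invented for illustration and are not actual uAPI documentation.

```c
/**
 * DOC: VM bind ioctl errors (hypothetical sketch, not actual uAPI doc)
 *
 * -ENOMEM:  An allocation failed; retrying the same call is unlikely
 *           to succeed without freeing memory first.
 * -ENOBUFS: An allocation whose size scales with the array of binds
 *           failed; the UMD is expected to split the array into a
 *           series of single binds and retry.
 */
```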
> > > [patch quoted in full upthread, snipped]
* Re: [PATCH] drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds
2024-07-20 23:20 ` Matthew Brost
@ 2024-07-22 3:53 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 9+ messages in thread
From: Ghimiray, Himal Prasad @ 2024-07-22 3:53 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, paulo.r.zanoni
On 21-07-2024 04:50, Matthew Brost wrote:
> On Sat, Jul 20, 2024 at 11:14:36PM +0000, Matthew Brost wrote:
>> On Sun, Jul 21, 2024 at 12:34:27AM +0530, Ghimiray, Himal Prasad wrote:
>>>
>>>
>>> On 19-07-2024 22:53, Matthew Brost wrote:
>>>> The size of an array of binds is directly tied to several kmalloc in the
>>>> KMD, thus making these kmalloc more likely to fail. Return -ENOBUFS in
>>>> the case of these failures.
>>>>
>>>> The expected UMD behavior upon returning -ENOBUFS is to split an array
>>>> of binds into a series of single binds.
>>>
>>> Would it be appropriate to have some doc/guidelines in the form of drm_err
>>> or kernel doc regarding expected behavior from UMD if the ioctl returns a
>>> -ENOBUFS error?
>>>
>>
>> Yes, this is on the todo list as part of the error handling cleanup for both
>> exec and bind IOCTLs. I think kernel doc should go in xe_drm.h with a
>> list of errno returned and expected UMD actions. Eventually I'd like to
>> get this in place for all IOCTLs. I was going to work on getting exec
>> and bind IOCTLs fixed up in the next couple of weeks (we have an
>> internal doc of required changes) to have it ready for when Thomas is
>> back (2 more weeks).
Sure, that sounds great!
>>
>> I made this change as we already have -ENOBUFS implemented at a
>> different failure point for an array of binds (the BB being too large, see
>> xe_migrate.c), so we might as well finish up this error code to make it
>
> Also see xe_sa.c, search for -ENOBUFS.
>
>> complete, as it is a fairly simple change. Also, Mesa has an MR [1] in
>> flight to handle -ENOBUFS situations.
Makes sense.
Reviewed-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>>
>> Matt
>>
>> [1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30276
>>
>>>> [patch quoted in full upthread, snipped]
end of thread, other threads:[~2024-07-22 3:54 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-07-19 17:23 [PATCH] drm/xe: Return -ENOBUFS if a kmalloc fails which is tied to an array of binds Matthew Brost
2024-07-19 17:27 ` ✓ CI.Patch_applied: success for " Patchwork
2024-07-19 17:28 ` ✓ CI.checkpatch: " Patchwork
2024-07-19 17:28 ` ✗ CI.KUnit: failure " Patchwork
2024-07-19 18:27 ` [PATCH] " Cavitt, Jonathan
2024-07-20 19:04 ` Ghimiray, Himal Prasad
2024-07-20 23:14 ` Matthew Brost
2024-07-20 23:20 ` Matthew Brost
2024-07-22 3:53 ` Ghimiray, Himal Prasad