From: Matthew Brost <matthew.brost@intel.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH] drm/xe: Use vmalloc for array of bind allocation in bind IOCTL
Date: Mon, 26 Feb 2024 15:07:43 +0000 [thread overview]
Message-ID: <ZdypP3kKDiS6wQYA@DUT025-TGLU.fm.intel.com> (raw)
In-Reply-To: <5d3d54c8a37debbff3f597c1fad36be90637ae81.camel@linux.intel.com>
On Mon, Feb 26, 2024 at 10:18:06AM +0100, Thomas Hellström wrote:
> Hi, Matt
> On Fri, 2024-02-23 at 11:37 -0800, Matthew Brost wrote:
> > Use vmalloc in an effort to allow a user to pass in a large number
> > of binds in an IOCTL (Mesa use case). Also use array allocations
> > rather than open coding the size calculation.
>
> I still think allowing a large number of binds like this is a bit
> dangerous, because to avoid out-of-memory DOSes we will need to
> restrict the number of binds in some way. And if the UMDs can't
> gracefully handle the OOMs and retry with a smaller number of binds
> they will break. And we're not allowed to break them.
>
See below, with [1] and this change we match Nouveau.
> Did we end up keeping a max number of binds that we guarantee for the
> foreseeable future?
>
Based on Paulo's feedback I think we need to get [1] into 6.8, as
without it the array of binds is fairly useless. Also, Nouveau has
similar uAPI with no such limits, and it uses vmalloc for these types
of allocations too. With that in mind I believe this patch should land
in 6.8 as well. If this proves to be a problem we can address it in
future releases, but if we impose some limit now, that can't really be
changed later since it then becomes uAPI.
Matt
[1] https://patchwork.freedesktop.org/series/129923/
> /Thomas
>
>
> >
> > Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> > drivers/gpu/drm/xe/xe_vm.c | 23 ++++++++++++-----------
> > 1 file changed, 12 insertions(+), 11 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > index e3bde897f6e8..45a12207ebf5 100644
> > --- a/drivers/gpu/drm/xe/xe_vm.c
> > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > @@ -2778,8 +2778,9 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe,
> >  		u64 __user *bind_user =
> >  			u64_to_user_ptr(args->vector_of_binds);
> >  
> > -		*bind_ops = kmalloc(sizeof(struct drm_xe_vm_bind_op) *
> > -				    args->num_binds, GFP_KERNEL);
> > +		*bind_ops = kvmalloc_array(args->num_binds,
> > +					   sizeof(struct drm_xe_vm_bind_op),
> > +					   GFP_KERNEL);
> >  		if (!*bind_ops)
> >  			return -ENOMEM;
> >  
> > @@ -2869,7 +2870,7 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe,
> >  
> >  free_bind_ops:
> >  	if (args->num_binds > 1)
> > -		kfree(*bind_ops);
> > +		kvfree(*bind_ops);
> >  	return err;
> >  }
> >  
> > @@ -2957,13 +2958,13 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> >  	}
> >  
> >  	if (args->num_binds) {
> > -		bos = kcalloc(args->num_binds, sizeof(*bos), GFP_KERNEL);
> > +		bos = kvcalloc(args->num_binds, sizeof(*bos), GFP_KERNEL);
> >  		if (!bos) {
> >  			err = -ENOMEM;
> >  			goto release_vm_lock;
> >  		}
> >  
> > -		ops = kcalloc(args->num_binds, sizeof(*ops), GFP_KERNEL);
> > +		ops = kvcalloc(args->num_binds, sizeof(*ops), GFP_KERNEL);
> >  		if (!ops) {
> >  			err = -ENOMEM;
> >  			goto release_vm_lock;
> > @@ -3104,10 +3105,10 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> >  	for (i = 0; bos && i < args->num_binds; ++i)
> >  		xe_bo_put(bos[i]);
> >  
> > -	kfree(bos);
> > -	kfree(ops);
> > +	kvfree(bos);
> > +	kvfree(ops);
> >  	if (args->num_binds > 1)
> > -		kfree(bind_ops);
> > +		kvfree(bind_ops);
> >  
> >  	return err;
> >  
> > @@ -3131,10 +3132,10 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> >  	if (q)
> >  		xe_exec_queue_put(q);
> >  free_objs:
> > -	kfree(bos);
> > -	kfree(ops);
> > +	kvfree(bos);
> > +	kvfree(ops);
> >  	if (args->num_binds > 1)
> > -		kfree(bind_ops);
> > +		kvfree(bind_ops);
> >  	return err;
> >  }
> > 
>