From: Matthew Brost <matthew.brost@intel.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: dri-devel@lists.freedesktop.org,
Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
intel-xe@lists.freedesktop.org, Daniel Vetter <daniel@ffwll.ch>
Subject: Re: [Intel-xe] [PATCH 3/4] drm/xe/vm: Perform accounting of userptr pinned pages
Date: Sun, 20 Aug 2023 03:43:29 +0000 [thread overview]
Message-ID: <ZOGL4bY8NvFrDP6O@DUT025-TGLU.fm.intel.com> (raw)
In-Reply-To: <20230818150845.96679-4-thomas.hellstrom@linux.intel.com>
On Fri, Aug 18, 2023 at 05:08:44PM +0200, Thomas Hellström wrote:
> Account these pages against RLIMIT_MEMLOCK following how RDMA does this
> with CAP_IPC_LOCK bypassing the limit.
>
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Patch LGTM, but I have a couple of nits on naming plus a possible assert below.
> ---
> drivers/gpu/drm/xe/xe_vm.c | 43 ++++++++++++++++++++++++++++++++++++--
> 1 file changed, 41 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index ecbcad696b60..d9c000689002 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -34,6 +34,33 @@
>
> #define TEST_VM_ASYNC_OPS_ERROR
>
> +/*
> + * Perform userptr PIN accounting against RLIMIT_MEMLOCK for now, similarly
> + * to how RDMA does this.
> + */
> +static int xe_vma_mlock_alloc(struct xe_vma *vma, unsigned long num_pages)
> +{
xe_vma_userptr_mlock_alloc()? Or maybe even xe_vma_userptr_mlock_reserve()?
> + unsigned long lock_limit, new_pinned;
> + struct mm_struct *mm = vma->userptr.notifier.mm;
> +
Could this be a candidate for the new assert macros, to ensure that the vma
is a userptr and is pinned? Not sure if those have merged yet.
> + if (!can_do_mlock())
> + return -EPERM;
> +
> + lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
> + new_pinned = atomic64_add_return(num_pages, &mm->pinned_vm);
> + if (new_pinned > lock_limit && !capable(CAP_IPC_LOCK)) {
> + atomic64_sub(num_pages, &mm->pinned_vm);
> + return -ENOMEM;
> + }
> +
> + return 0;
> +}
> +
> +static void xe_vma_mlock_free(struct xe_vma *vma, unsigned long num_pages)
> +{
xe_vma_userptr_mlock_free()? Or maybe even xe_vma_userptr_mlock_release()?
Same assert comment applies here.
Anyway, I'll leave addressing these nits up to you. With that:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> + atomic64_sub(num_pages, &vma->userptr.notifier.mm->pinned_vm);
> +}
> +
> /**
> * xe_vma_userptr_check_repin() - Advisory check for repin needed
> * @vma: The userptr vma
> @@ -89,9 +116,17 @@ int xe_vma_userptr_pin_pages(struct xe_vma *vma)
> !read_only);
> pages = vma->userptr.pinned_pages;
> } else {
> + if (xe_vma_is_pinned(vma)) {
> + ret = xe_vma_mlock_alloc(vma, num_pages);
> + if (ret)
> + return ret;
> + }
> +
> pages = kvmalloc_array(num_pages, sizeof(*pages), GFP_KERNEL);
> - if (!pages)
> - return -ENOMEM;
> + if (!pages) {
> + ret = -ENOMEM;
> + goto out_account;
> + }
> }
>
> pinned = ret = 0;
> @@ -187,6 +222,9 @@ int xe_vma_userptr_pin_pages(struct xe_vma *vma)
> mm_closed:
> kvfree(pages);
> vma->userptr.pinned_pages = NULL;
> +out_account:
> + if (xe_vma_is_pinned(vma))
> + xe_vma_mlock_free(vma, num_pages);
> return ret;
> }
>
> @@ -1004,6 +1042,7 @@ static void xe_vma_destroy_late(struct xe_vma *vma)
> unpin_user_pages_dirty_lock(vma->userptr.pinned_pages,
> vma->userptr.num_pinned,
> !read_only);
> + xe_vma_mlock_free(vma, xe_vma_size(vma) >> PAGE_SHIFT);
> kvfree(vma->userptr.pinned_pages);
> }
>
> --
> 2.41.0
>
Thread overview: 17+ messages
2023-08-18 15:08 [Intel-xe] [PATCH 0/4] drm/xe: Support optional pinning of userptr pages Thomas Hellström
2023-08-18 15:08 ` [Intel-xe] [PATCH 1/4] drm/xe/vm: Use onion unwind for xe_vma_userptr_pin_pages() Thomas Hellström
2023-08-18 18:15 ` Matthew Brost
2023-08-18 15:08 ` [Intel-xe] [PATCH 2/4] drm/xe/vm: Implement userptr page pinning Thomas Hellström
2023-08-20 4:06 ` Matthew Brost
2023-08-22 8:23 ` Thomas Hellström
2023-08-18 15:08 ` [Intel-xe] [PATCH 3/4] drm/xe/vm: Perform accounting of userptr pinned pages Thomas Hellström
2023-08-20 3:43 ` Matthew Brost [this message]
2023-08-22 8:10 ` Thomas Hellström
2023-08-18 15:08 ` [Intel-xe] [PATCH 4/4] drm/xe/uapi: Support pinning of userptr vmas Thomas Hellström
2023-08-20 3:54 ` Matthew Brost
2023-08-22 8:25 ` Thomas Hellström
2023-08-18 15:12 ` [Intel-xe] ✓ CI.Patch_applied: success for drm/xe: Support optional pinning of userptr pages Patchwork
2023-08-18 15:12 ` [Intel-xe] ✗ CI.checkpatch: warning " Patchwork
2023-08-18 15:14 ` [Intel-xe] ✓ CI.KUnit: success " Patchwork
2023-08-18 15:17 ` [Intel-xe] ✓ CI.Build: " Patchwork
2023-08-18 15:18 ` [Intel-xe] ✗ CI.Hooks: failure " Patchwork