From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Matthew Brost <matthew.brost@intel.com>, intel-xe@lists.freedesktop.org
Subject: Re: [PATCH v2 3/3] drm/xe: Get page on user fence creation
Date: Fri, 01 Mar 2024 07:36:01 +0100 [thread overview]
Message-ID: <6235b4c6c6d537d928795cd2dde24c27ce77ecce.camel@linux.intel.com> (raw)
In-Reply-To: <20240301035522.238307-4-matthew.brost@intel.com>
On Thu, 2024-02-29 at 19:55 -0800, Matthew Brost wrote:
> Attempt to get page on user fence creation and kmap_local_page on
> signaling. Should reduce latency and can ensure 64 bit atomicity
> compared to copy_to_user.
>
> v2:
> - Prefault page and drop ref (Thomas)
> - Use set_page_dirty_lock (Thomas)
> - try_cmpxchg64 loop (Thomas)
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_sync.c | 52 +++++++++++++++++++++++++++++++-----
> 1 file changed, 45 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_sync.c b/drivers/gpu/drm/xe/xe_sync.c
> index c20e1f9ad267..bf7f22519cc5 100644
> --- a/drivers/gpu/drm/xe/xe_sync.c
> +++ b/drivers/gpu/drm/xe/xe_sync.c
> @@ -6,6 +6,7 @@
> #include "xe_sync.h"
>
> #include <linux/dma-fence-array.h>
> +#include <linux/highmem.h>
> #include <linux/kthread.h>
> #include <linux/sched/mm.h>
> #include <linux/uaccess.h>
> @@ -28,6 +29,7 @@ struct xe_user_fence {
> u64 __user *addr;
> u64 value;
> int signalled;
> + bool use_page;
> };
>
> static void user_fence_destroy(struct kref *kref)
> @@ -53,7 +55,9 @@ static struct xe_user_fence *user_fence_create(struct xe_device *xe, u64 addr,
> u64 value)
> {
> struct xe_user_fence *ufence;
> + struct page *page;
> u64 __user *ptr = u64_to_user_ptr(addr);
> + int ret;
>
> if (!access_ok(ptr, sizeof(ptr)))
> return ERR_PTR(-EFAULT);
> @@ -69,19 +73,53 @@ static struct xe_user_fence *user_fence_create(struct xe_device *xe, u64 addr,
> ufence->mm = current->mm;
> mmgrab(ufence->mm);
>
> + /* Prefault page */
> + ret = get_user_pages_fast(addr, 1, FOLL_WRITE, &page);
> + if (ret == 1) {
> + ufence->use_page = true;
> + put_page(page);
> + }
> +
> return ufence;
> }
>
> static void user_fence_worker(struct work_struct *w)
> {
> struct xe_user_fence *ufence = container_of(w, struct xe_user_fence, worker);
> -
> - if (mmget_not_zero(ufence->mm)) {
> - kthread_use_mm(ufence->mm);
> - if (copy_to_user(ufence->addr, &ufence->value, sizeof(ufence->value)))
> - XE_WARN_ON("Copy to user failed");
> - kthread_unuse_mm(ufence->mm);
> - mmput(ufence->mm);
> + struct mm_struct *mm = ufence->mm;
> +
> + if (mmget_not_zero(mm)) {
> + kthread_use_mm(mm);
> + if (ufence->use_page) {
> + struct page *page;
> + int ret;
> +
> + ret = get_user_pages_fast((unsigned long)ufence->addr, 1, FOLL_WRITE, &page);
> + if (ret == 1) {
> + u64 *ptr;
> + u64 old = 0;
> + void *va;
> +
> + va = kmap_local_page(page);
> + ptr = va + offset_in_page(ufence->addr);
> + while (!try_cmpxchg64(ptr, &old, ufence->value))
> + continue;
> + kunmap_local(va);
> +
> + set_page_dirty_lock(page);
> + put_page(page);
> + } else {
> + ufence->use_page = false;
> + }
> + }
> + if (!ufence->use_page) {
Hmm, trying to figure out the semantics here. If this is ever used on
32-bit and get_user_pages() fails, then I figure we can't guarantee
atomicity. That would typically happen if the user-fence sits in a
buffer object or in device memory?
> + if (copy_to_user(ufence->addr, &ufence->value, sizeof(ufence->value)))
We should probably use put_user() here. On 64-bit I think that always
translates to an atomic write. And IMO we should precede it with an mb()
to avoid in-kernel reordering. That would typically need to pair with an
mb() in the reader as well.
> + drm_warn(&ufence->xe->drm, "Copy to user failed\n");
> + }
> + kthread_unuse_mm(mm);
> + mmput(mm);
> }
>
> wake_up_all(&ufence->xe->ufence_wq);
Thread overview: 19+ messages
2024-03-01 3:55 [PATCH v2 0/3] xe_sync and ufence rework Matthew Brost
2024-03-01 3:55 ` [PATCH v2 1/3] drm/xe: Remove used xe_sync_entry_wait Matthew Brost
2024-03-01 3:55 ` [PATCH v2 2/3] drm/xe: Validate user fence during creation Matthew Brost
2024-03-01 6:55 ` Thomas Hellström
2024-03-01 7:58 ` Matthew Brost
2024-03-01 8:22 ` Thomas Hellström
2024-03-01 3:55 ` [PATCH v2 3/3] drm/xe: Get page on user fence creation Matthew Brost
2024-03-01 6:36 ` Thomas Hellström [this message]
2024-03-01 7:46 ` Matthew Brost
2024-03-01 8:56 ` Thomas Hellström
2024-03-01 13:31 ` Thomas Hellström
2024-03-01 22:43 ` Matthew Brost
2024-03-01 3:59 ` ✓ CI.Patch_applied: success for xe_sync and ufence rework (rev2) Patchwork
2024-03-01 4:00 ` ✓ CI.checkpatch: " Patchwork
2024-03-01 4:00 ` ✓ CI.KUnit: " Patchwork
2024-03-01 4:11 ` ✓ CI.Build: " Patchwork
2024-03-01 4:12 ` ✓ CI.Hooks: " Patchwork
2024-03-01 4:13 ` ✓ CI.checksparse: " Patchwork
2024-03-01 4:39 ` ✗ CI.BAT: failure " Patchwork