Intel-XE Archive on lore.kernel.org
From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: intel-xe@lists.freedesktop.org
Subject: Re: [PATCH v2 3/3] drm/xe: Get page on user fence creation
Date: Fri, 01 Mar 2024 09:56:57 +0100	[thread overview]
Message-ID: <e278d2f506078a50d072f132cd51835391f3b3a9.camel@linux.intel.com> (raw)
In-Reply-To: <ZeGHwQZ7tLj9oAv8@DUT025-TGLU.fm.intel.com>

On Fri, 2024-03-01 at 07:46 +0000, Matthew Brost wrote:
> On Fri, Mar 01, 2024 at 07:36:01AM +0100, Thomas Hellström wrote:
> > On Thu, 2024-02-29 at 19:55 -0800, Matthew Brost wrote:
> > > Attempt to get the page on user fence creation and kmap_local_page()
> > > it on signaling. Should reduce latency and can ensure 64-bit
> > > atomicity compared to copy_to_user().
> > > 
> > > v2:
> > >  - Prefault page and drop ref (Thomas)
> > >  - Use set_page_dirty_lock (Thomas)
> > >  - try_cmpxchg64 loop (Thomas)
> > > 
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > ---
> > >  drivers/gpu/drm/xe/xe_sync.c | 52 +++++++++++++++++++++++++++++++-----
> > >  1 file changed, 45 insertions(+), 7 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/xe/xe_sync.c b/drivers/gpu/drm/xe/xe_sync.c
> > > index c20e1f9ad267..bf7f22519cc5 100644
> > > --- a/drivers/gpu/drm/xe/xe_sync.c
> > > +++ b/drivers/gpu/drm/xe/xe_sync.c
> > > @@ -6,6 +6,7 @@
> > >  #include "xe_sync.h"
> > >  
> > >  #include <linux/dma-fence-array.h>
> > > +#include <linux/highmem.h>
> > >  #include <linux/kthread.h>
> > >  #include <linux/sched/mm.h>
> > >  #include <linux/uaccess.h>
> > > @@ -28,6 +29,7 @@ struct xe_user_fence {
> > >  	u64 __user *addr;
> > >  	u64 value;
> > >  	int signalled;
> > > +	bool use_page;
> > >  };
> > >  
> > >  static void user_fence_destroy(struct kref *kref)
> > > @@ -53,7 +55,9 @@ static struct xe_user_fence *user_fence_create(struct xe_device *xe, u64 addr,
> > >  					       u64 value)
> > >  {
> > >  	struct xe_user_fence *ufence;
> > > +	struct page *page;
> > >  	u64 __user *ptr = u64_to_user_ptr(addr);
> > > +	int ret;
> > >  
> > >  	if (!access_ok(ptr, sizeof(ptr)))
> > >  		return ERR_PTR(-EFAULT);
> > > @@ -69,19 +73,53 @@ static struct xe_user_fence *user_fence_create(struct xe_device *xe, u64 addr,
> > >  	ufence->mm = current->mm;
> > >  	mmgrab(ufence->mm);
> > >  
> > > +	/* Prefault page */
> > > +	ret = get_user_pages_fast(addr, 1, FOLL_WRITE, &page);
> > > +	if (ret == 1) {
> > > +		ufence->use_page = true;
> > > +		put_page(page);
> > > +	}
> > > +
> > >  	return ufence;
> > >  }
> > >  
> > >  static void user_fence_worker(struct work_struct *w)
> > >  {
> > >  	struct xe_user_fence *ufence = container_of(w, struct xe_user_fence, worker);
> > > -
> > > -	if (mmget_not_zero(ufence->mm)) {
> > > -		kthread_use_mm(ufence->mm);
> > > -		if (copy_to_user(ufence->addr, &ufence->value, sizeof(ufence->value)))
> > > -			XE_WARN_ON("Copy to user failed");
> > > -		kthread_unuse_mm(ufence->mm);
> > > -		mmput(ufence->mm);
> > > +	struct mm_struct *mm = ufence->mm;
> > > +
> > > +	if (mmget_not_zero(mm)) {
> > > +		kthread_use_mm(mm);
> > > +		if (ufence->use_page) {
> > > +			struct page *page;
> > > +			int ret;
> > > +
> > > +			ret = get_user_pages_fast((unsigned long)ufence->addr,
> > > +						  1, FOLL_WRITE, &page);
> > > +			if (ret == 1) {
> > > +				atomic64_t *ptr;
> > > +				u64 old = 0;
> > > +				void *va;
> > > +
> > > +				va = kmap_local_page(page);
> > > +				ptr = va + offset_in_page(ufence->addr);
> > > +				while (!try_cmpxchg64(ptr, &old, ufence->value))
> > > +					continue;

I'm still a little worried about the availability of this, like when
the build-bot tests on all available architectures and Linus has
already pulled the stuff. It's definitely there on i386, and it seems
to be used generically in sched/clock.c. Might be worthwhile to CC
dri-devel/lkml and have the build bots pick it up...


> > > +				kunmap_local(va);
> > > +
> > > +				set_page_dirty_lock(page);
> > > +				put_page(page);
> > > +			} else {
> > > +				ufence->use_page = false;
> > > +			}
> > > +		}
> > > +		if (!ufence->use_page) {
> > 
> > Hmm. Trying to figure out the semantics here. If ever used on 32-bit,
> > and get_user_pages() fails, then I figure we can't guarantee atomicity.
> > That would typically be if the user-fence is in buffer-object or
> > device memory?
> > 
> 
> I think so. Based on [1], if the ufence is a mapped BO on TGL,
> get_user_pages_fast() doesn't work and the !use_page path is used.
> Hence I added the malloc'ed-ufence section in [1].
> 
> [1]
> https://patchwork.freedesktop.org/patch/580147/?series=130417&rev=1
> 
> > > +			if (copy_to_user(ufence->addr, &ufence->value,
> > > +					 sizeof(ufence->value)))
> > 
> > We should probably use put_user() here. On 64-bit I think that always
> > translates to an atomic write. And we should IMO precede it with an
> > mb() to avoid in-kernel reordering. That would typically need to pair
> > with an mb() in the reader as well.
> > 
> 
> Got it on the put_user(); seems to work.
> 
> A little unclear on mb() usage.
> 
> Would it be?
> mb()
> put_user()

Yes, this is correct. Actually we'd want smp_store_release() semantics
here, but this is stricter.


> 
> And then in xe_wait_user_fence.c:do_compare?
> mb()
> copy_from_user

Here we'd want smp_load_acquire(), but we'd have to make do with the
below:
get_user()
mb()

And user-space should use a similar mb() as well if they need to be
sure things are indeed done after the signalling.

/Thomas

> 
> Matt
> 
> > 
> > > +				drm_warn(&ufence->xe->drm, "Copy to user failed\n");
> > > +		}
> > > +		kthread_unuse_mm(mm);
> > > +		mmput(mm);
> > >  	}
> > >  
> > >  	wake_up_all(&ufence->xe->ufence_wq);
> > 



Thread overview: 19+ messages
2024-03-01  3:55 [PATCH v2 0/3] xe_sync and ufence rework Matthew Brost
2024-03-01  3:55 ` [PATCH v2 1/3] drm/xe: Remove used xe_sync_entry_wait Matthew Brost
2024-03-01  3:55 ` [PATCH v2 2/3] drm/xe: Validate user fence during creation Matthew Brost
2024-03-01  6:55   ` Thomas Hellström
2024-03-01  7:58     ` Matthew Brost
2024-03-01  8:22       ` Thomas Hellström
2024-03-01  3:55 ` [PATCH v2 3/3] drm/xe: Get page on user fence creation Matthew Brost
2024-03-01  6:36   ` Thomas Hellström
2024-03-01  7:46     ` Matthew Brost
2024-03-01  8:56       ` Thomas Hellström [this message]
2024-03-01 13:31         ` Thomas Hellström
2024-03-01 22:43         ` Matthew Brost
2024-03-01  3:59 ` ✓ CI.Patch_applied: success for xe_sync and ufence rework (rev2) Patchwork
2024-03-01  4:00 ` ✓ CI.checkpatch: " Patchwork
2024-03-01  4:00 ` ✓ CI.KUnit: " Patchwork
2024-03-01  4:11 ` ✓ CI.Build: " Patchwork
2024-03-01  4:12 ` ✓ CI.Hooks: " Patchwork
2024-03-01  4:13 ` ✓ CI.checksparse: " Patchwork
2024-03-01  4:39 ` ✗ CI.BAT: failure " Patchwork
