From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Matthew Auld <matthew.auld@intel.com>, intel-xe@lists.freedesktop.org
Cc: Oak Zeng <oak.zeng@intel.com>, stable@vger.kernel.org
Subject: Re: [PATCH 2/3] drm/xe/hmm: Don't dereference struct page pointers without notifier lock
Date: Fri, 28 Feb 2025 14:08:09 +0100
Message-ID: <db2a5741b3e43cc3a54fa9c01e6af55677275070.camel@linux.intel.com>
In-Reply-To: <b62352fd-f16a-423c-8a20-5d7697a6c4a7@intel.com>

On Fri, 2025-02-28 at 12:55 +0000, Matthew Auld wrote:
> On 28/02/2025 10:44, Thomas Hellström wrote:
> > The pfns that we obtain from hmm_range_fault() point to pages that
> > we don't have a reference on, and the only guarantee that they are
> > still in the cpu page-tables is that the notifier lock must be held
> > and the notifier seqno is still valid.
> > 
> > So while building the sg table and marking the pages accessed /
> > dirty we need to hold this lock with a validated seqno.
> > 
> > However, the lock is reclaim tainted, which makes
> > sg_alloc_table_from_pages_segment() unusable, since it internally
> > allocates memory.
> > 
> > Instead build the sg-table manually. For the non-iommu case this
> > might lead to fewer coalesces, but if that's a problem it can be
> > fixed up later in the resource cursor code. For the iommu case, the
> > whole sg-table may still be coalesced to a single contiguous device
> > va region.
> > 
> > This avoids marking pages that we don't own dirty and accessed, and
> > it also avoids dereferencing struct pages that we don't own.
> > 
> > Fixes: 81e058a3e7fd ("drm/xe: Introduce helper to populate userptr")
> > Cc: Oak Zeng <oak.zeng@intel.com>
> > Cc: <stable@vger.kernel.org> # v6.10+
> > Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > ---
> >   drivers/gpu/drm/xe/xe_hmm.c | 115 ++++++++++++++++++++++++++----------
> >   1 file changed, 85 insertions(+), 30 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_hmm.c b/drivers/gpu/drm/xe/xe_hmm.c
> > index c56738fa713b..d3b5551496d0 100644
> > --- a/drivers/gpu/drm/xe/xe_hmm.c
> > +++ b/drivers/gpu/drm/xe/xe_hmm.c
> > @@ -42,6 +42,36 @@ static void xe_mark_range_accessed(struct hmm_range *range, bool write)
> >   	}
> >   }
> >   
> > +static int xe_alloc_sg(struct sg_table *st, struct hmm_range *range,
> > +		       struct rw_semaphore *notifier_sem)
> > +{
> > +	unsigned long i, npages, hmm_pfn;
> > +	unsigned long num_chunks = 0;
> > +	int ret;
> > +
> > +	/* HMM docs says this is needed. */
> > +	ret = down_read_interruptible(notifier_sem);
> > +	if (ret)
> > +		return ret;
> > +
> > +	if (mmu_interval_read_retry(range->notifier, range->notifier_seq)) {
> > +		up_read(notifier_sem);
> > +		return -EAGAIN;
> > +	}
> > +
> > +	npages = xe_npages_in_range(range->start, range->end);
> > +	for (i = 0; i < npages;) {
> > +		hmm_pfn = range->hmm_pfns[i];
> > +		if (!(hmm_pfn & HMM_PFN_VALID)) {
> 
> Is this possible? The default_flags are always REQ_FAULT, so that
> should ensure PFN_VALID, or hmm_range_fault() would have returned an
> error?
> 
> From the docs:
> 
> "HMM_PFN_REQ_FAULT - The output must have HMM_PFN_VALID or 
> hmm_range_fault() will fail"
> 
> Should this be an assert?
> 
> Also probably dumb question, but why do we need to hold the notifier 
> lock over this loop? What is it protecting?

Docs for hmm_pfn_to_map_order():

/*
 * This must be called under the caller 'user_lock' after a successful
 * mmu_interval_read_begin(). The caller must have tested for
 * HMM_PFN_VALID already.
 */

I'm fine with changing it to an assert, and I agree that the lock is
pointless here: we're operating on thread-local data. But I also think
that not adhering to the doc requirements might cause problems in the
future, for example if the map order encoding is dropped and the order
is instead grabbed from the underlying page.
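
Something like this, perhaps (untested sketch; it assumes we also pass
the xe device into xe_alloc_sg() so that xe_assert() is usable there):

	npages = xe_npages_in_range(range->start, range->end);
	for (i = 0; i < npages;) {
		hmm_pfn = range->hmm_pfns[i];

		/* REQ_FAULT: a successful hmm_range_fault() implies VALID. */
		xe_assert(xe, hmm_pfn & HMM_PFN_VALID);

		num_chunks++;
		i += 1UL << hmm_pfn_to_map_order(hmm_pfn);
	}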

/Thomas


> 
> > +			up_read(notifier_sem);
> > +			return -EFAULT;
> > +		}
> > +		num_chunks++;
> > +		i += 1UL << hmm_pfn_to_map_order(hmm_pfn);
> > +	}
> > +	up_read(notifier_sem);
> > +
> > +	return sg_alloc_table(st, num_chunks, GFP_KERNEL);
> > +}
> > +
> >   /**
> >    * xe_build_sg() - build a scatter gather table for all the physical pages/pfn
> >    * in a hmm_range. dma-map pages if necessary. dma-address is save in sg table
> > @@ -50,6 +80,7 @@ static void xe_mark_range_accessed(struct hmm_range *range, bool write)
> >    * @range: the hmm range that we build the sg table from. range->hmm_pfns[]
> >    * has the pfn numbers of pages that back up this hmm address range.
> >    * @st: pointer to the sg table.
> > + * @notifier_sem: The xe notifier lock.
> >    * @write: whether we write to this range. This decides dma map direction
> >    * for system pages. If write we map it bi-diretional; otherwise
> >    * DMA_TO_DEVICE
> > @@ -76,38 +107,33 @@ static void xe_mark_range_accessed(struct hmm_range *range, bool write)
> >    * Returns 0 if successful; -ENOMEM if fails to allocate memory
> >    */
> >   static int xe_build_sg(struct xe_device *xe, struct hmm_range *range,
> > -		       struct sg_table *st, bool write)
> > +		       struct sg_table *st,
> > +		       struct rw_semaphore *notifier_sem,
> > +		       bool write)
> >   {
> >   	struct device *dev = xe->drm.dev;
> > -	struct page **pages;
> > -	u64 i, npages;
> > -	int ret;
> > -
> > -	npages = xe_npages_in_range(range->start, range->end);
> > -	pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL);
> > -	if (!pages)
> > -		return -ENOMEM;
> > -
> > -	for (i = 0; i < npages; i++) {
> > -		pages[i] = hmm_pfn_to_page(range->hmm_pfns[i]);
> > -		xe_assert(xe, !is_device_private_page(pages[i]));
> > -	}
> > -
> > -	ret = sg_alloc_table_from_pages_segment(st, pages, npages, 0, npages << PAGE_SHIFT,
> > -						xe_sg_segment_size(dev), GFP_KERNEL);
> > -	if (ret)
> > -		goto free_pages;
> > -
> > -	ret = dma_map_sgtable(dev, st, write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE,
> > -			      DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING);
> > -	if (ret) {
> > -		sg_free_table(st);
> > -		st = NULL;
> > +	unsigned long hmm_pfn, size;
> > +	struct scatterlist *sgl;
> > +	struct page *page;
> > +	unsigned long i, j;
> > +
> > +	lockdep_assert_held(notifier_sem);
> > +
> > +	i = 0;
> > +	for_each_sg(st->sgl, sgl, st->nents, j) {
> > +		hmm_pfn = range->hmm_pfns[i];
> > +		page = hmm_pfn_to_page(hmm_pfn);
> > +		xe_assert(xe, !is_device_private_page(page));
> > +		size = 1UL << hmm_pfn_to_map_order(hmm_pfn);
> > +		sg_set_page(sgl, page, size << PAGE_SHIFT, 0);
> > +		if (unlikely(j == st->nents - 1))
> > +			sg_mark_end(sgl);
> > +		i += size;
> >   	}
> > +	xe_assert(xe, i == xe_npages_in_range(range->start, range->end));
> >   
> > -free_pages:
> > -	kvfree(pages);
> > -	return ret;
> > +	return dma_map_sgtable(dev, st, write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE,
> > +			       DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING);
> >   }
> >   
> >   /**
> > @@ -235,16 +261,45 @@ int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma,
> >   	if (ret)
> >   		goto free_pfns;
> >   
> > -	ret = xe_build_sg(vm->xe, &hmm_range, &userptr->sgt, write);
> > +	if (unlikely(userptr->sg)) {
> > +		ret = down_write_killable(&vm->userptr.notifier_lock);
> > +		if (ret)
> > +			goto free_pfns;
> > +
> > +		xe_hmm_userptr_free_sg(uvma);
> > +		up_write(&vm->userptr.notifier_lock);
> > +	}
> > +
> > +	ret = xe_alloc_sg(&userptr->sgt, &hmm_range, &vm->userptr.notifier_lock);
> >   	if (ret)
> >   		goto free_pfns;
> >   
> > +	ret = down_read_interruptible(&vm->userptr.notifier_lock);
> > +	if (ret)
> > +		goto free_st;
> > +
> > +	if (mmu_interval_read_retry(hmm_range.notifier, hmm_range.notifier_seq)) {
> > +		ret = -EAGAIN;
> > +		goto out_unlock;
> > +	}
> > +
> > +	ret = xe_build_sg(vm->xe, &hmm_range, &userptr->sgt,
> > +			  &vm->userptr.notifier_lock, write);
> > +	if (ret)
> > +		goto out_unlock;
> > +
> >   	xe_mark_range_accessed(&hmm_range, write);
> >   	userptr->sg = &userptr->sgt;
> >   	userptr->notifier_seq = hmm_range.notifier_seq;
> > +	up_read(&vm->userptr.notifier_lock);
> > +	kvfree(pfns);
> > +	return 0;
> >   
> > +out_unlock:
> > +	up_read(&vm->userptr.notifier_lock);
> > +free_st:
> > +	sg_free_table(&userptr->sgt);
> >   free_pfns:
> >   	kvfree(pfns);
> >   	return ret;
> >   }
> > -
> 


Thread overview: 10+ messages
2025-02-28 10:44 [PATCH 0/3] drm/xe: Userptr invalid page access fixes Thomas Hellström
2025-02-28 10:44 ` [PATCH 1/3] drm/xe/hmm: Style- and include fixes Thomas Hellström
2025-02-28 12:56   ` Matthew Auld
2025-02-28 10:44 ` [PATCH 2/3] drm/xe/hmm: Don't dereference struct page pointers without notifier lock Thomas Hellström
2025-02-28 12:55   ` Matthew Auld
2025-02-28 13:08     ` Thomas Hellström [this message]
2025-02-28 18:32       ` Matthew Auld
2025-03-04 11:28         ` Thomas Hellström
2025-02-28 10:44 ` [PATCH 3/3] drm/xe/userptr: Unmap userptrs in the mmu notifier Thomas Hellström
2025-02-28 12:08 ` ✗ CI.Patch_applied: failure for drm/xe: Userptr invalid page access fixes Patchwork
