From: Ralph Campbell <rcampbell@nvidia.com>
To: Jason Gunthorpe <jgg@ziepe.ca>,
	Jerome Glisse <jglisse@redhat.com>,
	"John Hubbard" <jhubbard@nvidia.com>, <Felix.Kuehling@amd.com>
Cc: <linux-rdma@vger.kernel.org>, <linux-mm@kvack.org>,
	Andrea Arcangeli <aarcange@redhat.com>,
	<dri-devel@lists.freedesktop.org>,
	<amd-gfx@lists.freedesktop.org>,
	Jason Gunthorpe <jgg@mellanox.com>
Subject: Re: [PATCH v2 hmm 02/11] mm/hmm: Use hmm_mirror not mm as an argument for hmm_range_register
Date: Fri, 7 Jun 2019 11:24:44 -0700
Message-ID: <4a391bd4-287c-5f13-3bca-c6a46ff8d08c@nvidia.com>
In-Reply-To: <20190606184438.31646-3-jgg@ziepe.ca>


On 6/6/19 11:44 AM, Jason Gunthorpe wrote:
> From: Jason Gunthorpe <jgg@mellanox.com>
> 
> Ralph observes that hmm_range_register() can only be called by a driver
> while a mirror is registered. Make this clear in the API by passing in the
> mirror structure as a parameter.
> 
> This also simplifies reasoning about the lifetime model for struct hmm:
> the hmm pointer must be valid as part of a registered mirror, so all we
> need in hmm_range_register() is a simple kref_get().
> 
> Suggested-by: Ralph Campbell <rcampbell@nvidia.com>
> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>

You might want to CC Ben for the nouveau part:
CC: Ben Skeggs <bskeggs@redhat.com>

Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>


> ---
> v2
> - Include the oneline patch to nouveau_svm.c
> ---
>   drivers/gpu/drm/nouveau/nouveau_svm.c |  2 +-
>   include/linux/hmm.h                   |  7 ++++---
>   mm/hmm.c                              | 15 ++++++---------
>   3 files changed, 11 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
> index 93ed43c413f0bb..8c92374afcf227 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_svm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
> @@ -649,7 +649,7 @@ nouveau_svm_fault(struct nvif_notify *notify)
>   		range.values = nouveau_svm_pfn_values;
>   		range.pfn_shift = NVIF_VMM_PFNMAP_V0_ADDR_SHIFT;
>   again:
> -		ret = hmm_vma_fault(&range, true);
> +		ret = hmm_vma_fault(&svmm->mirror, &range, true);
>   		if (ret == 0) {
>   			mutex_lock(&svmm->mutex);
>   			if (!hmm_vma_range_done(&range)) {
> diff --git a/include/linux/hmm.h b/include/linux/hmm.h
> index 688c5ca7068795..2d519797cb134a 100644
> --- a/include/linux/hmm.h
> +++ b/include/linux/hmm.h
> @@ -505,7 +505,7 @@ static inline bool hmm_mirror_mm_is_alive(struct hmm_mirror *mirror)
>    * Please see Documentation/vm/hmm.rst for how to use the range API.
>    */
>   int hmm_range_register(struct hmm_range *range,
> -		       struct mm_struct *mm,
> +		       struct hmm_mirror *mirror,
>   		       unsigned long start,
>   		       unsigned long end,
>   		       unsigned page_shift);
> @@ -541,7 +541,8 @@ static inline bool hmm_vma_range_done(struct hmm_range *range)
>   }
>   
>   /* This is a temporary helper to avoid merge conflict between trees. */
> -static inline int hmm_vma_fault(struct hmm_range *range, bool block)
> +static inline int hmm_vma_fault(struct hmm_mirror *mirror,
> +				struct hmm_range *range, bool block)
>   {
>   	long ret;
>   
> @@ -554,7 +555,7 @@ static inline int hmm_vma_fault(struct hmm_range *range, bool block)
>   	range->default_flags = 0;
>   	range->pfn_flags_mask = -1UL;
>   
> -	ret = hmm_range_register(range, range->vma->vm_mm,
> +	ret = hmm_range_register(range, mirror,
>   				 range->start, range->end,
>   				 PAGE_SHIFT);
>   	if (ret)
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 547002f56a163d..8796447299023c 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -925,13 +925,13 @@ static void hmm_pfns_clear(struct hmm_range *range,
>    * Track updates to the CPU page table see include/linux/hmm.h
>    */
>   int hmm_range_register(struct hmm_range *range,
> -		       struct mm_struct *mm,
> +		       struct hmm_mirror *mirror,
>   		       unsigned long start,
>   		       unsigned long end,
>   		       unsigned page_shift)
>   {
>   	unsigned long mask = ((1UL << page_shift) - 1UL);
> -	struct hmm *hmm;
> +	struct hmm *hmm = mirror->hmm;
>   
>   	range->valid = false;
>   	range->hmm = NULL;
> @@ -945,15 +945,12 @@ int hmm_range_register(struct hmm_range *range,
>   	range->start = start;
>   	range->end = end;
>   
> -	hmm = hmm_get_or_create(mm);
> -	if (!hmm)
> -		return -EFAULT;
> -
>   	/* Check if hmm_mm_destroy() was call. */
> -	if (hmm->mm == NULL || hmm->dead) {
> -		hmm_put(hmm);
> +	if (hmm->mm == NULL || hmm->dead)
>   		return -EFAULT;
> -	}
> +
> +	range->hmm = hmm;
> +	kref_get(&hmm->kref);
>   
>   	/* Initialize range to track CPU page table updates. */
>   	mutex_lock(&hmm->lock);
> 

