* [PATCH 2/7] mm/rmap: map_pte() was not handling private ZONE_DEVICE page properly
From: jglisse @ 2018-08-24 19:25 UTC
To: linux-mm
Cc: Andrew Morton, linux-kernel, Ralph Campbell,
Jérôme Glisse, Kirill A. Shutemov, stable
From: Ralph Campbell <rcampbell@nvidia.com>
Private ZONE_DEVICE pages use a special pte entry and thus are not
present. Properly handle this case in map_pte(); it is already handled
in check_pte(), but the map_pte() part was most probably lost in a
rebase.

Without this patch the slow migration path cannot migrate private
ZONE_DEVICE memory back to regular memory. This was found after stress
testing migration back to system memory. It can ultimately lead the CPU
into an infinite page fault loop on the special swap entry.
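For context, the CPU fault side that ends up looping looks roughly
like this (a simplified sketch of the do_swap_page() handling from this
kernel era, not a verbatim copy):

	/* mm/memory.c: do_swap_page(), simplified sketch */
	entry = pte_to_swp_entry(vmf->orig_pte);
	if (is_migration_entry(entry)) {
		/* wait for the migration to finish, then retry the fault */
		migration_entry_wait(vma->vm_mm, vmf->pmd, vmf->address);
	} else if (is_device_private_entry(entry)) {
		/*
		 * Un-addressable device memory: ask the driver to migrate
		 * the page back to system memory. If the slow migration
		 * path cannot replace the device private entry, the CPU
		 * faults on the very same entry again, forever.
		 */
		ret = device_private_entry_fault(vma, vmf->address, entry,
						 vmf->flags, vmf->pmd);
	}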
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: stable@vger.kernel.org
---
mm/page_vma_mapped.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index ae3c2a35d61b..1cf5b9bfb559 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -21,6 +21,15 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
if (!is_swap_pte(*pvmw->pte))
return false;
} else {
+ if (is_swap_pte(*pvmw->pte)) {
+ swp_entry_t entry;
+
+ /* Handle un-addressable ZONE_DEVICE memory */
+ entry = pte_to_swp_entry(*pvmw->pte);
+ if (is_device_private_entry(entry))
+ return true;
+ }
+
if (!pte_present(*pvmw->pte))
return false;
}
--
2.17.1
* [PATCH 3/7] mm/hmm: fix race between hmm_mirror_unregister() and mmu_notifier callback
From: jglisse @ 2018-08-24 19:25 UTC
To: linux-mm; +Cc: Andrew Morton, linux-kernel, Ralph Campbell, stable
From: Ralph Campbell <rcampbell@nvidia.com>
In hmm_mirror_unregister(), mm->hmm is set to NULL and then
mmu_notifier_unregister_no_release() is called. That creates a small
window where mmu_notifier can call mmu_notifier_ops with mm->hmm equal
to NULL. Fix this by first unregistering mmu notifier callbacks and
then setting mm->hmm to NULL.
Similarly in hmm_register(), set mm->hmm before registering mmu_notifier
callbacks so callback functions always see mm->hmm set.
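To make the window concrete, here is an illustrative interleaving with
the old ordering (a pseudo-timeline, with the callback body paraphrased
from hmm_invalidate_range_start() of this era):

	CPU0: hmm_mirror_unregister()       CPU1: mmu notifier callback
	  mm->hmm = NULL;
	                                    hmm_invalidate_range_start(...)
	                                      struct hmm *hmm = mm->hmm;
	                                      /* hmm == NULL here: oops */
	  mmu_notifier_unregister_no_release(&hmm->mmu_notifier, mm);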
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: stable@vger.kernel.org
---
mm/hmm.c | 36 +++++++++++++++++++++---------------
1 file changed, 21 insertions(+), 15 deletions(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index 9a068a1da487..a16678d08127 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -91,16 +91,6 @@ static struct hmm *hmm_register(struct mm_struct *mm)
spin_lock_init(&hmm->lock);
hmm->mm = mm;
- /*
- * We should only get here if hold the mmap_sem in write mode ie on
- * registration of first mirror through hmm_mirror_register()
- */
- hmm->mmu_notifier.ops = &hmm_mmu_notifier_ops;
- if (__mmu_notifier_register(&hmm->mmu_notifier, mm)) {
- kfree(hmm);
- return NULL;
- }
-
spin_lock(&mm->page_table_lock);
if (!mm->hmm)
mm->hmm = hmm;
@@ -108,12 +98,27 @@ static struct hmm *hmm_register(struct mm_struct *mm)
cleanup = true;
spin_unlock(&mm->page_table_lock);
- if (cleanup) {
- mmu_notifier_unregister(&hmm->mmu_notifier, mm);
- kfree(hmm);
- }
+ if (cleanup)
+ goto error;
+
+ /*
+ * We should only get here if we hold the mmap_sem in write mode, i.e.
+ * on registration of the first mirror through hmm_mirror_register().
+ */
+ hmm->mmu_notifier.ops = &hmm_mmu_notifier_ops;
+ if (__mmu_notifier_register(&hmm->mmu_notifier, mm))
+ goto error_mm;
return mm->hmm;
+
+error_mm:
+ spin_lock(&mm->page_table_lock);
+ if (mm->hmm == hmm)
+ mm->hmm = NULL;
+ spin_unlock(&mm->page_table_lock);
+error:
+ kfree(hmm);
+ return NULL;
}
void hmm_mm_destroy(struct mm_struct *mm)
@@ -278,12 +283,13 @@ void hmm_mirror_unregister(struct hmm_mirror *mirror)
if (!should_unregister || mm == NULL)
return;
+ mmu_notifier_unregister_no_release(&hmm->mmu_notifier, mm);
+
spin_lock(&mm->page_table_lock);
if (mm->hmm == hmm)
mm->hmm = NULL;
spin_unlock(&mm->page_table_lock);
- mmu_notifier_unregister_no_release(&hmm->mmu_notifier, mm);
kfree(hmm);
}
EXPORT_SYMBOL(hmm_mirror_unregister);
--
2.17.1
* Re: [PATCH 2/7] mm/rmap: map_pte() was not handling private ZONE_DEVICE page properly
From: Balbir Singh @ 2018-08-30 14:05 UTC
To: jglisse
Cc: linux-mm, Andrew Morton, linux-kernel, Ralph Campbell,
Kirill A. Shutemov, stable
On Fri, Aug 24, 2018 at 03:25:44PM -0400, jglisse@redhat.com wrote:
> From: Ralph Campbell <rcampbell@nvidia.com>
>
> Private ZONE_DEVICE pages use a special pte entry and thus are not
> present. Properly handle this case in map_pte(); it is already handled
> in check_pte(), but the map_pte() part was most probably lost in a
> rebase.
>
> Without this patch the slow migration path cannot migrate private
> ZONE_DEVICE memory back to regular memory. This was found after stress
> testing migration back to system memory. It can ultimately lead the
> CPU into an infinite page fault loop on the special swap entry.
>
> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Cc: stable@vger.kernel.org
> ---
> mm/page_vma_mapped.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index ae3c2a35d61b..1cf5b9bfb559 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -21,6 +21,15 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
> if (!is_swap_pte(*pvmw->pte))
> return false;
> } else {
> + if (is_swap_pte(*pvmw->pte)) {
> + swp_entry_t entry;
> +
> + /* Handle un-addressable ZONE_DEVICE memory */
> + entry = pte_to_swp_entry(*pvmw->pte);
> + if (is_device_private_entry(entry))
> + return true;
> + }
> +
This happens just for !PVMW_SYNC && PVMW_MIGRATION? I presume this
is triggered via the remove_migration_pte() code path? Doesn't
returning true here imply that we've taken the ptl lock for the
pvmw?
Balbir
* Re: [PATCH 3/7] mm/hmm: fix race between hmm_mirror_unregister() and mmu_notifier callback
From: Balbir Singh @ 2018-08-30 14:14 UTC
To: jglisse; +Cc: linux-mm, Andrew Morton, linux-kernel, Ralph Campbell, stable
On Fri, Aug 24, 2018 at 03:25:45PM -0400, jglisse@redhat.com wrote:
> From: Ralph Campbell <rcampbell@nvidia.com>
>
> In hmm_mirror_unregister(), mm->hmm is set to NULL and then
> mmu_notifier_unregister_no_release() is called. That creates a small
> window where mmu_notifier can call mmu_notifier_ops with mm->hmm equal
> to NULL. Fix this by first unregistering mmu notifier callbacks and
> then setting mm->hmm to NULL.
>
> Similarly in hmm_register(), set mm->hmm before registering mmu_notifier
> callbacks so callback functions always see mm->hmm set.
>
> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> Reviewed-by: John Hubbard <jhubbard@nvidia.com>
> Reviewed-by: Jérôme Glisse <jglisse@redhat.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: stable@vger.kernel.org
Reviewed-by: Balbir Singh <bsingharora@gmail.com>
* Re: [PATCH 2/7] mm/rmap: map_pte() was not handling private ZONE_DEVICE page properly
From: Jerome Glisse @ 2018-08-30 14:34 UTC
To: Balbir Singh
Cc: linux-mm, Andrew Morton, linux-kernel, Ralph Campbell,
Kirill A. Shutemov, stable
On Fri, Aug 31, 2018 at 12:05:38AM +1000, Balbir Singh wrote:
> On Fri, Aug 24, 2018 at 03:25:44PM -0400, jglisse@redhat.com wrote:
> > From: Ralph Campbell <rcampbell@nvidia.com>
> >
> > Private ZONE_DEVICE pages use a special pte entry and thus are not
> > present. Properly handle this case in map_pte(); it is already handled
> > in check_pte(), but the map_pte() part was most probably lost in a
> > rebase.
> >
> > Without this patch the slow migration path cannot migrate private
> > ZONE_DEVICE memory back to regular memory. This was found after stress
> > testing migration back to system memory. It can ultimately lead the
> > CPU into an infinite page fault loop on the special swap entry.
> >
> > Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> > Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > Cc: stable@vger.kernel.org
> > ---
> > mm/page_vma_mapped.c | 9 +++++++++
> > 1 file changed, 9 insertions(+)
> >
> > diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> > index ae3c2a35d61b..1cf5b9bfb559 100644
> > --- a/mm/page_vma_mapped.c
> > +++ b/mm/page_vma_mapped.c
> > @@ -21,6 +21,15 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
> > if (!is_swap_pte(*pvmw->pte))
> > return false;
> > } else {
> > + if (is_swap_pte(*pvmw->pte)) {
> > + swp_entry_t entry;
> > +
> > + /* Handle un-addressable ZONE_DEVICE memory */
> > + entry = pte_to_swp_entry(*pvmw->pte);
> > + if (is_device_private_entry(entry))
> > + return true;
> > + }
> > +
>
> This happens just for !PVMW_SYNC && PVMW_MIGRATION? I presume this
> is triggered via the remove_migration_pte() code path? Doesn't
> returning true here imply that we've taken the ptl lock for the
> pvmw?
This happens through try_to_unmap() from migrate_vma_unmap(), and thus
with !PVMW_SYNC and !PVMW_MIGRATION.

But you are right about the ptl lock: looking at the code, we were
modifying the pte without holding the pte lock, and because
page_vma_mapped_walk() only tries to unlock when pvmw->ptl is set (it
was NULL here), this never triggered any warning.
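For reference, the unlock helper is roughly the following (from
include/linux/rmap.h of this era), so with pvmw->ptl == NULL nothing
gets unlocked and no warning ever fires:

	static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
	{
		if (pvmw->pte)
			pte_unmap(pvmw->pte);
		if (pvmw->ptl)
			spin_unlock(pvmw->ptl);
	}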
I am going to post a v2 shortly which addresses that.
Cheers,
Jérôme
* [PATCH 3/7] mm/rmap: map_pte() was not handling private ZONE_DEVICE page properly v2
From: jglisse @ 2018-08-30 14:41 UTC
To: linux-mm
Cc: Andrew Morton, linux-kernel, Ralph Campbell,
Jérôme Glisse, Kirill A. Shutemov, Balbir Singh,
stable
From: Ralph Campbell <rcampbell@nvidia.com>
Private ZONE_DEVICE pages use a special pte entry and thus are not
present. Properly handle this case in map_pte(); it is already handled
in check_pte(), but the map_pte() part was most probably lost in a
rebase.

Without this patch the slow migration path cannot migrate private
ZONE_DEVICE memory back to regular memory. This was found after stress
testing migration back to system memory. It can ultimately lead the CPU
into an infinite page fault loop on the special swap entry.
Changes since v1:
- properly lock pte directory in map_pte() (see the sketch below)
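The device private case no longer returns true early: it now falls
through to the unchanged tail of map_pte(), which takes the pte lock
before returning (a rough sketch of that tail, from this era):

	/* tail of map_pte(), unchanged by this patch */
	pvmw->ptl = pte_lockptr(pvmw->vma->vm_mm, pvmw->pmd);
	spin_lock(pvmw->ptl);
	return true;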
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: stable@vger.kernel.org
---
mm/page_vma_mapped.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index ae3c2a35d61b..bd67e23dce33 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -21,7 +21,14 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
if (!is_swap_pte(*pvmw->pte))
return false;
} else {
- if (!pte_present(*pvmw->pte))
+ if (is_swap_pte(*pvmw->pte)) {
+ swp_entry_t entry;
+
+ /* Handle un-addressable ZONE_DEVICE memory */
+ entry = pte_to_swp_entry(*pvmw->pte);
+ if (!is_device_private_entry(entry))
+ return false;
+ } else if (!pte_present(*pvmw->pte))
return false;
}
}
--
2.17.1
* Re: [PATCH 3/7] mm/rmap: map_pte() was not handling private ZONE_DEVICE page properly v2
From: Balbir Singh @ 2018-08-31 9:27 UTC
To: jglisse
Cc: linux-mm, Andrew Morton, linux-kernel, Ralph Campbell,
Kirill A. Shutemov, stable
On Thu, Aug 30, 2018 at 10:41:56AM -0400, jglisse@redhat.com wrote:
> From: Ralph Campbell <rcampbell@nvidia.com>
>
> Private ZONE_DEVICE pages use a special pte entry and thus are not
> present. Properly handle this case in map_pte(); it is already handled
> in check_pte(), but the map_pte() part was most probably lost in a
> rebase.
>
> Without this patch the slow migration path cannot migrate private
> ZONE_DEVICE memory back to regular memory. This was found after stress
> testing migration back to system memory. It can ultimately lead the
> CPU into an infinite page fault loop on the special swap entry.
>
> Changes since v1:
> - properly lock pte directory in map_pte()
>
> Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Cc: Balbir Singh <bsingharora@gmail.com>
> Cc: stable@vger.kernel.org
> ---
> mm/page_vma_mapped.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index ae3c2a35d61b..bd67e23dce33 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -21,7 +21,14 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
> if (!is_swap_pte(*pvmw->pte))
> return false;
> } else {
> - if (!pte_present(*pvmw->pte))
> + if (is_swap_pte(*pvmw->pte)) {
> + swp_entry_t entry;
> +
> + /* Handle un-addressable ZONE_DEVICE memory */
> + entry = pte_to_swp_entry(*pvmw->pte);
> + if (!is_device_private_entry(entry))
> + return false;
OK, so we skip this pte during unmap since it's already unmapped? This
prevents try_to_unmap() from unmapping it, and it gets restored with
the MIGRATE_PFN_MIGRATE flag cleared?

Sounds like the right thing, if I understand it correctly.
Acked-by: Balbir Singh <bsingharora@gmail.com>
Balbir Singh.
* Re: [PATCH 3/7] mm/rmap: map_pte() was not handling private ZONE_DEVICE page properly v2
From: Jerome Glisse @ 2018-08-31 16:19 UTC
To: Balbir Singh
Cc: linux-mm, Andrew Morton, linux-kernel, Ralph Campbell,
Kirill A. Shutemov, stable
On Fri, Aug 31, 2018 at 07:27:24PM +1000, Balbir Singh wrote:
> On Thu, Aug 30, 2018 at 10:41:56AM -0400, jglisse@redhat.com wrote:
> > From: Ralph Campbell <rcampbell@nvidia.com>
> >
> > Private ZONE_DEVICE pages use a special pte entry and thus are not
> > present. Properly handle this case in map_pte(); it is already handled
> > in check_pte(), but the map_pte() part was most probably lost in a
> > rebase.
> >
> > Without this patch the slow migration path cannot migrate private
> > ZONE_DEVICE memory back to regular memory. This was found after stress
> > testing migration back to system memory. It can ultimately lead the
> > CPU into an infinite page fault loop on the special swap entry.
> >
> > Changes since v1:
> > - properly lock pte directory in map_pte()
> >
> > Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> > Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > Cc: Balbir Singh <bsingharora@gmail.com>
> > Cc: stable@vger.kernel.org
> > ---
> > mm/page_vma_mapped.c | 9 ++++++++-
> > 1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> > index ae3c2a35d61b..bd67e23dce33 100644
> > --- a/mm/page_vma_mapped.c
> > +++ b/mm/page_vma_mapped.c
> > @@ -21,7 +21,14 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
> > if (!is_swap_pte(*pvmw->pte))
> > return false;
> > } else {
> > - if (!pte_present(*pvmw->pte))
> > + if (is_swap_pte(*pvmw->pte)) {
> > + swp_entry_t entry;
> > +
> > + /* Handle un-addressable ZONE_DEVICE memory */
> > + entry = pte_to_swp_entry(*pvmw->pte);
> > + if (!is_device_private_entry(entry))
> > + return false;
>
> OK, so we skip this pte from unmap since it's already unmapped? This prevents
> try_to_unmap from unmapping it and it gets restored with MIGRATE_PFN_MIGRATE
> flag cleared?
>
> Sounds like the right thing, if I understand it correctly
Not exactly: we do not skip it, we replace it with a migration pte.
See try_to_unmap_one(), which gets called with the TTU_MIGRATION flag
set (which, contrary to what you might expect, does not translate into
PVMW_MIGRATION being set).

From the migration point of view, even if this is a swap pte it is
still a valid mapping of the page and is counted as such for all
intents and purposes. The only thing we do not need is to flush the
CPU tlb or cache.

All of this happens when we are migrating something back to regular
memory, either because of a CPU fault or because the device driver
wants to make room in its memory and decided to evict that page back
to regular memory.
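For the curious, the relevant branch of try_to_unmap_one() looks
roughly like this (a trimmed sketch of the code from this era, not a
verbatim copy):

	if (IS_ENABLED(CONFIG_MIGRATION) && (flags & TTU_MIGRATION) &&
	    is_zone_device_page(page)) {
		swp_entry_t entry;
		pte_t swp_pte;

		/* no CPU tlb or cache flush needed, the pte is not present */
		pteval = ptep_get_and_clear(mm, pvmw.address, pvmw.pte);

		/*
		 * Replace the device private entry with a migration entry;
		 * the page stays accounted as mapped until the migration
		 * entry is removed.
		 */
		entry = make_migration_entry(page, 0);
		swp_pte = swp_entry_to_pte(entry);
		set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte);
		/* ... reference and dirty accounting elided ... */
	}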
Cheers,
Jérôme
* Re: [PATCH 3/7] mm/rmap: map_pte() was not handling private ZONE_DEVICE page properly v2
From: Balbir Singh @ 2018-09-02 6:58 UTC
To: Jerome Glisse
Cc: linux-mm, Andrew Morton, linux-kernel, Ralph Campbell,
Kirill A. Shutemov, stable
On Fri, Aug 31, 2018 at 12:19:35PM -0400, Jerome Glisse wrote:
> On Fri, Aug 31, 2018 at 07:27:24PM +1000, Balbir Singh wrote:
> > On Thu, Aug 30, 2018 at 10:41:56AM -0400, jglisse@redhat.com wrote:
> > > From: Ralph Campbell <rcampbell@nvidia.com>
> > >
> > > Private ZONE_DEVICE pages use a special pte entry and thus are not
> > > present. Properly handle this case in map_pte(); it is already handled
> > > in check_pte(), but the map_pte() part was most probably lost in a
> > > rebase.
> > >
> > > Without this patch the slow migration path cannot migrate private
> > > ZONE_DEVICE memory back to regular memory. This was found after stress
> > > testing migration back to system memory. It can ultimately lead the
> > > CPU into an infinite page fault loop on the special swap entry.
> > >
> > > Changes since v1:
> > > - properly lock pte directory in map_pte()
> > >
> > > Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
> > > Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> > > Cc: Andrew Morton <akpm@linux-foundation.org>
> > > Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > > Cc: Balbir Singh <bsingharora@gmail.com>
> > > Cc: stable@vger.kernel.org
> > > ---
> > > mm/page_vma_mapped.c | 9 ++++++++-
> > > 1 file changed, 8 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> > > index ae3c2a35d61b..bd67e23dce33 100644
> > > --- a/mm/page_vma_mapped.c
> > > +++ b/mm/page_vma_mapped.c
> > > @@ -21,7 +21,14 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
> > > if (!is_swap_pte(*pvmw->pte))
> > > return false;
> > > } else {
> > > - if (!pte_present(*pvmw->pte))
> > > + if (is_swap_pte(*pvmw->pte)) {
> > > + swp_entry_t entry;
> > > +
> > > + /* Handle un-addressable ZONE_DEVICE memory */
> > > + entry = pte_to_swp_entry(*pvmw->pte);
> > > + if (!is_device_private_entry(entry))
> > > + return false;
> >
> > OK, so we skip this pte from unmap since it's already unmapped? This prevents
> > try_to_unmap from unmapping it and it gets restored with MIGRATE_PFN_MIGRATE
> > flag cleared?
> >
> > Sounds like the right thing, if I understand it correctly
>
> Well not exactly we do not skip it, we replace it with a migration
I think I missed the ! in !is_device_private_entry, so that seems
reasonable.
Reviewed-by: Balbir Singh <bsingharora@gmail.com>