From: David Hildenbrand <david@redhat.com>
To: Baolin Wang <baolin.wang@linux.alibaba.com>,
akpm@linux-foundation.org, mike.kravetz@oracle.com
Cc: dalias@libc.org, linux-ia64@vger.kernel.org,
linux-sh@vger.kernel.org, linux-kernel@vger.kernel.org,
James.Bottomley@HansenPartnership.com, linux-mm@kvack.org,
paulus@samba.org, sparclinux@vger.kernel.org,
agordeev@linux.ibm.com, will@kernel.org,
linux-arch@vger.kernel.org, linux-s390@vger.kernel.org,
arnd@arndb.de, ysato@users.sourceforge.jp, deller@gmx.de,
catalin.marinas@arm.com, borntraeger@linux.ibm.com,
gor@linux.ibm.com, hca@linux.ibm.com, songmuchun@bytedance.com,
linux-arm-kernel@lists.infradead.org, tsbogend@alpha.franken.de,
linux-parisc@vger.kernel.org, linux-mips@vger.kernel.org,
svens@linux.ibm.com, linuxppc-dev@lists.ozlabs.org,
davem@davemloft.net
Subject: Re: [PATCH v4 3/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when unmapping
Date: Wed, 11 May 2022 19:35:50 +0200
Message-ID: <f1c904e7-0b16-2893-eb25-0b968817fb8c@redhat.com>
In-Reply-To: <0a2e547238cad5bc153a85c3e9658cb9d55f9cac.1652270205.git.baolin.wang@linux.alibaba.com>
On 11.05.22 14:04, Baolin Wang wrote:
> Some architectures (like ARM64) support CONT-PTE/PMD size hugetlb
> pages, meaning that in addition to the PMD/PUD size hugetlb pages
> (2M and 1G), the CONT-PTE/PMD sizes (64K and 32M) are available
> when a 4K base page size is used.
>
> When unmapping a hugetlb page, we look up the relevant page table
> entry via huge_pte_offset() only once and nuke it. This is correct
> for PMD or PUD size hugetlb pages, since they always occupy exactly
> one pmd or pud entry in the page table.
>
> However, this is incorrect for CONT-PTE and CONT-PMD size hugetlb
> pages, since they span several contiguous pte or pmd entries that
> share the same page table attributes; as a result, we nuke only one
> of the entries backing such a CONT-PTE/PMD size hugetlb page.
>
> Currently, try_to_unmap() is only passed a hugetlb page when that
> page is poisoned. This means we will unmap only one pte entry of a
> poisoned CONT-PTE or CONT-PMD size hugetlb page, leaving the other
> subpages of the poisoned page still accessible, which can cause
> serious problems.
>
> To fix this, switch to huge_ptep_clear_flush() to nuke the hugetlb
> page table entries, since it already handles CONT-PTE and CONT-PMD
> size hugetlb pages.
>
> We already use set_huge_swap_pte_at() to install a poisoned swap
> entry for a poisoned hugetlb page. Additionally, add a VM_BUG_ON()
> in try_to_unmap() to make sure the hugetlb page passed in is indeed
> poisoned.
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> Reviewed-by: Muchun Song <songmuchun@bytedance.com>
> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
> mm/rmap.c | 39 ++++++++++++++++++++++-----------------
> 1 file changed, 22 insertions(+), 17 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 4e96daf..219e287 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1528,6 +1528,11 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>
> if (folio_test_hugetlb(folio)) {
> /*
> + * The try_to_unmap() is only passed a hugetlb page
> + * in the case where the hugetlb page is poisoned.
> + */
> + VM_BUG_ON_PAGE(!PageHWPoison(subpage), subpage);
> + /*
> * huge_pmd_unshare may unmap an entire PMD page.
> * There is no way of knowing exactly which PMDs may
> * be cached for this mm, so we must flush them all.
> @@ -1562,28 +1567,28 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> break;
> }
> }
> + pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
> } else {
> flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
> - }
> -
> - /*
> - * Nuke the page table entry. When having to clear
> - * PageAnonExclusive(), we always have to flush.
> - */
> - if (should_defer_flush(mm, flags) && !anon_exclusive) {
> /*
> - * We clear the PTE but do not flush so potentially
> - * a remote CPU could still be writing to the folio.
> - * If the entry was previously clean then the
> - * architecture must guarantee that a clear->dirty
> - * transition on a cached TLB entry is written through
> - * and traps if the PTE is unmapped.
> + * Nuke the page table entry. When having to clear
> + * PageAnonExclusive(), we always have to flush.
> */
> - pteval = ptep_get_and_clear(mm, address, pvmw.pte);
> + if (should_defer_flush(mm, flags) && !anon_exclusive) {
> + /*
> + * We clear the PTE but do not flush so potentially
> + * a remote CPU could still be writing to the folio.
> + * If the entry was previously clean then the
> + * architecture must guarantee that a clear->dirty
> + * transition on a cached TLB entry is written through
> + * and traps if the PTE is unmapped.
> + */
> + pteval = ptep_get_and_clear(mm, address, pvmw.pte);
>
> - set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
> - } else {
> - pteval = ptep_clear_flush(vma, address, pvmw.pte);
> + set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
> + } else {
> + pteval = ptep_clear_flush(vma, address, pvmw.pte);
> + }
> }
>
> /*
LGTM
Acked-by: David Hildenbrand <david@redhat.com>
--
Thanks,
David / dhildenb
Thread overview:
2022-05-11 12:04 [PATCH v4 0/3] Fix CONT-PTE/PMD size hugetlb issue when unmapping or migrating Baolin Wang
2022-05-11 12:04 ` [PATCH v4 1/3] mm: change huge_ptep_clear_flush() to return the original pte Baolin Wang
2022-05-11 12:04 ` [PATCH v4 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration Baolin Wang
2022-05-11 17:27 ` David Hildenbrand
2022-05-11 12:04 ` [PATCH v4 3/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when unmapping Baolin Wang
2022-05-11 17:35 ` David Hildenbrand [this message]