From: Hugh Dickins <hughd@google.com>
To: Jann Horn <jannh@google.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>,
David Hildenbrand <david@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Yang Shi <shy828301@gmail.com>, Peter Xu <peterx@redhat.com>,
linux-kernel@vger.kernel.org, Song Liu <song@kernel.org>,
sparclinux@vger.kernel.org,
Alexander Gordeev <agordeev@linux.ibm.com>,
Claudio Imbrenda <imbrenda@linux.ibm.com>,
Will Deacon <will@kernel.org>,
linux-s390@vger.kernel.org, Yu Zhao <yuzhao@google.com>,
Ira Weiny <ira.weiny@intel.com>,
Alistair Popple <apopple@nvidia.com>,
Hugh Dickins <hughd@google.com>,
Russell King <linux@armlinux.org.uk>,
Matthew Wilcox <willy@infradead.org>,
Steven Price <steven.price@arm.com>,
Christoph Hellwig <hch@infradead.org>,
Jason Gunthorpe <jgg@ziepe.ca>,
"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>,
Axel Rasmussen <axelrasmussen@google.com>,
Christian Borntraeger <borntraeger@linux.ibm.com>,
Thomas Hellstrom <thomas.hellstrom@linux.intel.com>,
Ralph Campbell <rcampbell@nvidia.com>,
	Pasha Tatashin <pasha.tatashin@soleen.com>,
Anshuman Khandual <anshuman.khandual@arm.com>,
Heiko Carstens <hca@linux.ibm.com>,
Qi Zheng <zhengqi.arch@bytedance.com>,
Suren Baghdasaryan <surenb@google.com>,
linux-arm-kernel@lists.infradead.org,
SeongJae Park <sj@kernel.org>,
linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
Naoya Horiguchi <naoya.horiguchi@nec.com>,
Zack Rusin <zackr@vmware.com>, Minchan Kim <minchan@kernel.org>,
"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
Andrew Morton <akpm@linux-foundation.org>,
Mel Gorman <mgorman@techsingularity.net>,
"David S. Miller" <davem@davemloft.net>,
Mike Rapoport <rppt@kernel.org>,
Mike Kravetz <mike.kravetz@oracle.com>
Subject: Re: [PATCH 10/12] mm/khugepaged: collapse_pte_mapped_thp() with mmap_read_lock()
Date: Thu, 1 Jun 2023 22:11:25 -0700 (PDT)
Message-ID: <dad171e1-cacf-e430-e91f-649ebeab605b@google.com>
In-Reply-To: <CAG48ez2X5oZyxaFniZ-HeGHDGjNuPBewGTjZLEHPWkBbBCaigg@mail.gmail.com>
On Wed, 31 May 2023, Jann Horn wrote:
> On Mon, May 29, 2023 at 8:26 AM Hugh Dickins <hughd@google.com> wrote:
> > Bring collapse_and_free_pmd() back into collapse_pte_mapped_thp().
> > It does need mmap_read_lock(), but it does not need mmap_write_lock(),
> > nor vma_start_write() nor i_mmap lock nor anon_vma lock. All racing
> > paths are relying on pte_offset_map_lock() and pmd_lock(), so use those.
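(To make "racing paths" concrete, here is a rough sketch of the pattern such
a path follows; the helper name is made up and this is not code from the
patch, only an illustration of the locking it describes:

	/* hypothetical walker, shown only to illustrate the locking pattern */
	static int example_walk_ptes(struct mm_struct *mm, pmd_t *pmd,
				     unsigned long addr)
	{
		spinlock_t *ptl;
		pte_t *pte;

		pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
		if (!pte)
			return -EAGAIN;	/* pmd changed underneath: retry or bail */
		/* ... inspect or modify the ptes here ... */
		pte_unmap_unlock(pte, ptl);
		return 0;
	}

The collapse side takes pmd_lock() and that same pte lock before clearing the
pmd, so the two cannot run concurrently on the same page table.)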
>
> I think there's a weirdness in the existing code, and this change
> probably turns that into a UAF bug.
>
> collapse_pte_mapped_thp() can be called on an address that might not
> be associated with a VMA anymore, and after this change, the page
> tables for that address might be in the middle of page table teardown
> in munmap(), right? The existing mmap_write_lock() guards against
> concurrent munmap() (so in the old code we are guaranteed to either
> see a normal VMA or not see the page tables anymore), but
> mmap_read_lock() only guards against the part of munmap() up to the
> mmap_write_downgrade() in do_vmi_align_munmap(), and unmap_region()
> (including free_pgtables()) happens after that.
Excellent point, thank you. Don't let anyone overhear us, but I have
to confess to you that that mmap_write_downgrade() has never impinged
forcefully enough on my consciousness: it's still my habit to think of
mmap_lock as exclusive over free_pgtables(), and I've not encountered
this bug in my testing.
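Spelling the window out for myself, a rough interleaving sketch (not code
from the tree, just the sequence you describe):

	madvise_collapse()/khugepaged          munmap()
	-----------------------------          --------
	                                       mmap_write_lock()
	                                       detach the vma(s)
	                                       mmap_write_downgrade()
	mmap_read_lock()   /* now succeeds */
	vma_lookup()       /* returns NULL */
	find_pmd_or_thp_or_none()
	  walks pgd/p4d/pud/pmd ...            unmap_region()
	                                         free_pgtables()
	  ... on page tables now being freed

and the old mmap_write_lock() was what kept free_pgtables() out of that
window.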
Right, I'll gladly incorporate your collapse_pte_mapped_thp()
rearrangement below. And am reassured to realize that by removing
mmap_lock dependence elsewhere, I won't have got it wrong in other places.
Thanks,
Hugh
>
> So we can now enter collapse_pte_mapped_thp() and race with concurrent
> free_pgtables() such that a PUD disappears under us while we're
> walking it or something like that:
>
>
> int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>                             bool install_pmd)
> {
>         struct mmu_notifier_range range;
>         unsigned long haddr = addr & HPAGE_PMD_MASK;
>         struct vm_area_struct *vma = vma_lookup(mm, haddr); // <<< returns NULL
>         struct page *hpage;
>         pte_t *start_pte, *pte;
>         pmd_t *pmd, pgt_pmd;
>         spinlock_t *pml, *ptl;
>         int nr_ptes = 0, result = SCAN_FAIL;
>         int i;
>
>         mmap_assert_locked(mm);
>
>         /* Fast check before locking page if already PMD-mapped */
>         result = find_pmd_or_thp_or_none(mm, haddr, &pmd); // <<< PUD UAF in here
>         if (result == SCAN_PMD_MAPPED)
>                 return result;
>
>         if (!vma || !vma->vm_file || // <<< bailout happens too late
>             !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
>                 return SCAN_VMA_CHECK;
>
>
> I guess the right fix here is to make sure that at least the basic VMA
> revalidation stuff (making sure there still is a VMA covering this
> range) happens before find_pmd_or_thp_or_none()? Like:
>
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 301c0e54a2ef..5db365587556 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1481,15 +1481,15 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>
>          mmap_assert_locked(mm);
>
> +        if (!vma || !vma->vm_file ||
> +            !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
> +                return SCAN_VMA_CHECK;
> +
>          /* Fast check before locking page if already PMD-mapped */
>          result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
>          if (result == SCAN_PMD_MAPPED)
>                  return result;
>
> -        if (!vma || !vma->vm_file ||
> -            !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
> -                return SCAN_VMA_CHECK;
> -
>          /*
>           * If we are here, we've succeeded in replacing all the native pages
>           * in the page cache with a single hugepage. If a mm were to fault-in
>
Thread overview: 50+ messages
2023-05-29 6:11 [PATCH 00/12] mm: free retracted page table by RCU Hugh Dickins
2023-05-29 6:14 ` [PATCH 01/12] mm/pgtable: add rcu_read_lock() and rcu_read_unlock()s Hugh Dickins
2023-05-31 17:06 ` Jann Horn
2023-06-02 2:50 ` Hugh Dickins
2023-06-02 14:21 ` Jann Horn
2023-05-29 6:16 ` [PATCH 02/12] mm/pgtable: add PAE safety to __pte_offset_map() Hugh Dickins
2023-05-29 13:56 ` Matthew Wilcox
[not found] ` <ZHeg3oRljRn6wlLX@ziepe.ca>
2023-06-02 5:35 ` Hugh Dickins
2023-05-29 6:17 ` [PATCH 03/12] arm: adjust_pte() use pte_offset_map_nolock() Hugh Dickins
2023-05-29 6:18 ` [PATCH 04/12] powerpc: assert_pte_locked() " Hugh Dickins
2023-05-29 6:20 ` [PATCH 05/12] powerpc: add pte_free_defer() for pgtables sharing page Hugh Dickins
2023-05-29 14:02 ` Matthew Wilcox
2023-05-29 14:36 ` Hugh Dickins
2023-06-01 13:57 ` Gerald Schaefer
2023-06-02 6:38 ` Hugh Dickins
2023-06-02 14:20 ` Jason Gunthorpe
2023-06-06 3:40 ` Hugh Dickins
2023-06-06 18:23 ` Jason Gunthorpe
2023-06-06 19:03 ` Peter Xu
2023-06-06 19:08 ` Jason Gunthorpe
2023-06-07 3:49 ` Hugh Dickins
2023-05-29 6:21 ` [PATCH 06/12] sparc: " Hugh Dickins
2023-06-06 3:46 ` Hugh Dickins
2023-05-29 6:22 ` [PATCH 07/12] s390: add pte_free_defer(), with use of mmdrop_async() Hugh Dickins
2023-06-06 5:11 ` Hugh Dickins
2023-06-06 18:39 ` Jason Gunthorpe
2023-06-08 2:46 ` Hugh Dickins
2023-06-06 19:40 ` Gerald Schaefer
2023-06-08 3:35 ` Hugh Dickins
2023-06-08 13:58 ` Jason Gunthorpe
2023-06-08 15:47 ` Gerald Schaefer
2023-05-29 6:23 ` [PATCH 08/12] mm/pgtable: add pte_free_defer() for pgtable as page Hugh Dickins
2023-06-01 13:31 ` Jann Horn
[not found] ` <ZHekpAKJ05cr/GLl@ziepe.ca>
2023-06-02 6:03 ` Hugh Dickins
2023-06-02 12:15 ` Jason Gunthorpe
2023-05-29 6:25 ` [PATCH 09/12] mm/khugepaged: retract_page_tables() without mmap or vma lock Hugh Dickins
2023-05-29 23:26 ` Peter Xu
2023-05-31 0:38 ` Hugh Dickins
2023-05-31 15:34 ` Jann Horn
[not found] ` <ZHe0A079X9B8jWlH@x1n>
2023-05-31 22:18 ` Jann Horn
2023-06-01 14:06 ` Jason Gunthorpe
2023-06-06 6:18 ` Hugh Dickins
2023-05-29 6:26 ` [PATCH 10/12] mm/khugepaged: collapse_pte_mapped_thp() with mmap_read_lock() Hugh Dickins
2023-05-31 17:25 ` Jann Horn
2023-06-02 5:11 ` Hugh Dickins [this message]
2023-05-29 6:28 ` [PATCH 11/12] mm/khugepaged: delete khugepaged_collapse_pte_mapped_thps() Hugh Dickins
2023-05-29 6:30 ` [PATCH 12/12] mm: delete mmap_write_trylock() and vma_try_start_write() Hugh Dickins
[not found] ` <CAG48ez0pCqfRdVSnJz7EKtNvMR65=zJgVB-72nTdrNuhtJNX2Q@mail.gmail.com>
2023-06-02 4:37 ` [PATCH 00/12] mm: free retracted page table by RCU Hugh Dickins
2023-06-02 15:26 ` Jann Horn
2023-06-06 6:28 ` Hugh Dickins