From mboxrd@z Thu Jan  1 00:00:00 1970
From: Samuel Mendoza-Jonas
To: stable@vger.kernel.org
CC: Baolin Wang, Mike Kravetz, David Hildenbrand, Muchun Song,
    Andrew Morton, Sasha Levin, Samuel Mendoza-Jonas
Subject: [PATCH 5.4 v2] mm/hugetlb: fix races when looking up a CONT-PTE/PMD size hugetlb page
Date: Tue, 13 Dec 2022 18:06:09 -0800
Message-ID: <20221214020609.832545-1-samjonas@amazon.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
X-Mailing-List: stable@vger.kernel.org

From: Baolin Wang

[ Upstream commit fac35ba763ed07ba93154c95ffc0c4a55023707f ]

Some architectures (like ARM64) support CONT-PTE/PMD size hugetlb:
besides PMD/PUD size hugetlb pages (2M and 1G), they can also map
CONT-PTE/PMD size pages (64K and 32M) when a 4K base page size is in
use. So when follow_page() looks up a CONT-PTE size hugetlb page,
follow_page_pte() uses pte_offset_map_lock() to take the pte entry
lock for the CONT-PTE size hugetlb page.
However, this pte entry lock is incorrect for CONT-PTE size hugetlb:
the correct lock is the one returned by huge_pte_lock(), which is
mm->page_table_lock. That means the pte entry of a CONT-PTE size
hugetlb page is unstable under the lock taken in follow_page_pte():
another thread can still migrate or poison the pte entry, which can
cause race issues even though we are under the 'pte lock'.

For example, suppose thread A is looking up a CONT-PTE size hugetlb
page via the move_pages() syscall under the lock, while another
thread B migrates the CONT-PTE hugetlb page at the same time. Thread
A then gets an incorrect page, and if thread A also goes on to
migrate that page, a data inconsistency error occurs. Moreover, the
same issue exists for CONT-PMD size hugetlb in follow_huge_pmd().

To fix the above issues, rename follow_huge_pmd() to
follow_huge_pmd_pte(), which handles both PMD and PTE level hugetlb
sizes and uses huge_pte_lock() to take the correct pte entry lock,
making the pte entry stable.

Mike said:

  Support for CONT_PMD/_PTE was added with bb9dd3df8ee9 ("arm64:
  hugetlb: refactor find_num_contig()"), part of the patch series
  "Support for contiguous pte hugepages", v4. However, I do not
  believe these code paths were executed until migration support was
  added with 5480280d3f2d ("arm64/mm: enable HugeTLB migration for
  contiguous bit HugeTLB pages"). I would go with 5480280d3f2d for
  the Fixes: target.

Link: https://lkml.kernel.org/r/635f43bdd85ac2615a58405da82b4d33c6e5eb05.1662017562.git.baolin.wang@linux.alibaba.com
Fixes: 5480280d3f2d ("arm64/mm: enable HugeTLB migration for contiguous bit HugeTLB pages")
Signed-off-by: Baolin Wang
Suggested-by: Mike Kravetz
Reviewed-by: Mike Kravetz
Cc: David Hildenbrand
Cc: Muchun Song
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
Signed-off-by: Sasha Levin
[5.4: Fix up contextual diffs before pin_user_pages()]
Signed-off-by: Samuel Mendoza-Jonas
---
v2: Fix up the stray "out" label that is now unused.
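
As a review aid, the sketch below condenses the locking pattern the
patch moves to. It mirrors the patched follow_huge_pmd_pte() in the
diff that follows and is illustrative only: lookup_hugetlb_page() is
a made-up name, kernel-internal context is assumed, and the
migration-entry retry path is omitted for brevity.

	struct page *lookup_hugetlb_page(struct vm_area_struct *vma,
					 unsigned long address, int flags)
	{
		struct hstate *h = hstate_vma(vma);
		struct mm_struct *mm = vma->vm_mm;
		struct page *page = NULL;
		spinlock_t *ptl;
		pte_t *ptep, pte;

		/* Walk to the pte at the hugetlb size this VMA actually uses. */
		ptep = huge_pte_offset(mm, address, huge_page_size(h));
		if (!ptep)
			return NULL;

		/*
		 * huge_pte_lock() picks the lock that really covers this
		 * hugetlb size (mm->page_table_lock for CONT-PTE/PMD), so
		 * the entry cannot be migrated or poisoned while it is
		 * held; the per-PMD lock from pte_offset_map_lock() gives
		 * no such guarantee here.
		 */
		ptl = huge_pte_lock(h, mm, ptep);
		pte = huge_ptep_get(ptep);
		if (pte_present(pte)) {
			page = pte_page(pte) +
				((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
			if (flags & FOLL_GET)
				get_page(page);
		}
		spin_unlock(ptl);
		return page;
	}

The real function additionally waits and retries when it finds a
migration entry; see the mm/hugetlb.c hunk below.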
 include/linux/hugetlb.h |  6 +++---
 mm/gup.c                | 13 ++++++++++++-
 mm/hugetlb.c            | 30 +++++++++++++++---------------
 3 files changed, 30 insertions(+), 19 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index cef70d6e1657..c7507135c2a0 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -127,8 +127,8 @@ struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address,
 struct page *follow_huge_pd(struct vm_area_struct *vma,
 			    unsigned long address, hugepd_t hpd,
 			    int flags, int pdshift);
-struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
-				pmd_t *pmd, int flags);
+struct page *follow_huge_pmd_pte(struct vm_area_struct *vma, unsigned long address,
+				 int flags);
 struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
 				pud_t *pud, int flags);
 struct page *follow_huge_pgd(struct mm_struct *mm, unsigned long address,
@@ -175,7 +175,7 @@ static inline void hugetlb_show_meminfo(void)
 {
 }
 #define follow_huge_pd(vma, addr, hpd, flags, pdshift)	NULL
-#define follow_huge_pmd(mm, addr, pmd, flags)	NULL
+#define follow_huge_pmd_pte(vma, addr, flags)	NULL
 #define follow_huge_pud(mm, addr, pud, flags)	NULL
 #define follow_huge_pgd(mm, addr, pgd, flags)	NULL
 #define prepare_hugepage_range(file, addr, len)	(-EINVAL)
diff --git a/mm/gup.c b/mm/gup.c
index 3ef769529548..98bd5e0a51c8 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -188,6 +188,17 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	spinlock_t *ptl;
 	pte_t *ptep, pte;
 
+	/*
+	 * Considering PTE level hugetlb, like continuous-PTE hugetlb on
+	 * ARM64 architecture.
+	 */
+	if (is_vm_hugetlb_page(vma)) {
+		page = follow_huge_pmd_pte(vma, address, flags);
+		if (page)
+			return page;
+		return no_page_table(vma, flags);
+	}
+
 retry:
 	if (unlikely(pmd_bad(*pmd)))
 		return no_page_table(vma, flags);
@@ -333,7 +344,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	if (pmd_none(pmdval))
 		return no_page_table(vma, flags);
 	if (pmd_huge(pmdval) && vma->vm_flags & VM_HUGETLB) {
-		page = follow_huge_pmd(mm, address, pmd, flags);
+		page = follow_huge_pmd_pte(vma, address, flags);
 		if (page)
 			return page;
 		return no_page_table(vma, flags);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2126d78f053d..e4478b62d0f7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5157,30 +5157,30 @@ follow_huge_pd(struct vm_area_struct *vma,
 }
 
 struct page * __weak
-follow_huge_pmd(struct mm_struct *mm, unsigned long address,
-		pmd_t *pmd, int flags)
+follow_huge_pmd_pte(struct vm_area_struct *vma, unsigned long address, int flags)
 {
+	struct hstate *h = hstate_vma(vma);
+	struct mm_struct *mm = vma->vm_mm;
 	struct page *page = NULL;
 	spinlock_t *ptl;
-	pte_t pte;
+	pte_t *ptep, pte;
+
 retry:
-	ptl = pmd_lockptr(mm, pmd);
-	spin_lock(ptl);
-	/*
-	 * make sure that the address range covered by this pmd is not
-	 * unmapped from other threads.
-	 */
-	if (!pmd_huge(*pmd))
-		goto out;
-	pte = huge_ptep_get((pte_t *)pmd);
+	ptep = huge_pte_offset(mm, address, huge_page_size(h));
+	if (!ptep)
+		return NULL;
+
+	ptl = huge_pte_lock(h, mm, ptep);
+	pte = huge_ptep_get(ptep);
 	if (pte_present(pte)) {
-		page = pmd_page(*pmd) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
+		page = pte_page(pte) +
+			((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
 		if (flags & FOLL_GET)
 			get_page(page);
 	} else {
 		if (is_hugetlb_entry_migration(pte)) {
 			spin_unlock(ptl);
-			__migration_entry_wait(mm, (pte_t *)pmd, ptl);
+			__migration_entry_wait(mm, ptep, ptl);
 			goto retry;
 		}
 		/*
@@ -5188,7 +5188,7 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
 		 * follow_page_mask().
 		 */
 	}
-out:
+
 	spin_unlock(ptl);
 	return page;
 }
-- 
2.25.1