From mboxrd@z Thu Jan 1 00:00:00 1970
From: peterx@redhat.com
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: James Houghton, David Hildenbrand, Yang Shi, peterx@redhat.com,
	Andrew Jones, Matthew Wilcox, linux-riscv@lists.infradead.org,
	Andrea Arcangeli, Christoph Hellwig, "Aneesh Kumar K . V",
	Vlastimil Babka, Jason Gunthorpe, Axel Rasmussen, Rik van Riel,
	John Hubbard, "Kirill A . Shutemov",
	linux-arm-kernel@lists.infradead.org, Lorenzo Stoakes,
	Andrew Morton, linuxppc-dev@lists.ozlabs.org, Mike Rapoport,
	Mike Kravetz
Subject: [PATCH 08/13] mm/gup: Handle hugetlb for no_page_table()
Date: Tue, 19 Dec 2023 15:55:33 +0800
Message-ID: <20231219075538.414708-9-peterx@redhat.com>
In-Reply-To: <20231219075538.414708-1-peterx@redhat.com>
References: <20231219075538.414708-1-peterx@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: Linux on PowerPC Developers Mail List

From: Peter Xu

no_page_table() is not yet used on hugetlb code paths; prepare it for
that.  The major difference here is that for hugetlb, no_page_table()
will return -EFAULT whenever the page cache does not exist, even for
VM_SHARED mappings.  See hugetlb_follow_page_mask().

Pass "address" into no_page_table() too, as hugetlb will need it.

Reviewed-by: Christoph Hellwig
Signed-off-by: Peter Xu
---
 mm/gup.c | 44 ++++++++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 82d28d517d0d..6c0d82fa8cc7 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -501,19 +501,27 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
 #ifdef CONFIG_MMU
 
 static struct page *no_page_table(struct vm_area_struct *vma,
-				  unsigned int flags)
+				  unsigned int flags, unsigned long address)
 {
+	if (!(flags & FOLL_DUMP))
+		return NULL;
+
 	/*
-	 * When core dumping an enormous anonymous area that nobody
-	 * has touched so far, we don't want to allocate unnecessary pages or
+	 * When core dumping, we don't want to allocate unnecessary pages or
 	 * page tables.  Return error instead of NULL to skip handle_mm_fault,
 	 * then get_dump_page() will return NULL to leave a hole in the dump.
 	 * But we can only make this optimization where a hole would surely
 	 * be zero-filled if handle_mm_fault() actually did handle it.
 	 */
-	if ((flags & FOLL_DUMP) &&
-	    (vma_is_anonymous(vma) || !vma->vm_ops->fault))
+	if (is_vm_hugetlb_page(vma)) {
+		struct hstate *h = hstate_vma(vma);
+
+		if (!hugetlbfs_pagecache_present(h, vma, address))
+			return ERR_PTR(-EFAULT);
+	} else if ((vma_is_anonymous(vma) || !vma->vm_ops->fault)) {
 		return ERR_PTR(-EFAULT);
+	}
+
 	return NULL;
 }
 
@@ -593,7 +601,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 
 	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
 	if (!ptep)
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	pte = ptep_get(ptep);
 	if (!pte_present(pte))
 		goto no_page;
@@ -685,7 +693,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	pte_unmap_unlock(ptep, ptl);
 	if (!pte_none(pte))
 		return NULL;
-	return no_page_table(vma, flags);
+	return no_page_table(vma, flags, address);
 }
 
 static struct page *follow_pmd_mask(struct vm_area_struct *vma,
@@ -701,27 +709,27 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	pmd = pmd_offset(pudp, address);
 	pmdval = pmdp_get_lockless(pmd);
 	if (pmd_none(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pmd_devmap(pmdval)) {
 		ptl = pmd_lock(mm, pmd);
 		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (likely(!pmd_trans_huge(pmdval)))
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 
 	if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	ptl = pmd_lock(mm, pmd);
 	if (unlikely(!pmd_present(*pmd))) {
 		spin_unlock(ptl);
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(!pmd_trans_huge(*pmd))) {
 		spin_unlock(ptl);
@@ -752,17 +760,17 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 
 	pud = pud_offset(p4dp, address);
 	if (pud_none(*pud))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pud_devmap(*pud)) {
 		ptl = pud_lock(mm, pud);
 		page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(pud_bad(*pud)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pmd_mask(vma, address, pud, flags, ctx);
 }
@@ -776,10 +784,10 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 
 	p4d = p4d_offset(pgdp, address);
 	if (p4d_none(*p4d))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	BUILD_BUG_ON(p4d_huge(*p4d));
 	if (unlikely(p4d_bad(*p4d)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pud_mask(vma, address, p4d, flags, ctx);
 }
@@ -829,7 +837,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 	pgd = pgd_offset(mm, address);
 
 	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_p4d_mask(vma, address, pgd, flags, ctx);
 }
-- 
2.41.0
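
For quick reference while reviewing, below is how no_page_table() reads
with this patch applied, stitched together from the first hunk above
(context lines included; a reviewer's aid only, not part of the patch,
with one comment added to flag the new hugetlb behavior):

static struct page *no_page_table(struct vm_area_struct *vma,
				  unsigned int flags, unsigned long address)
{
	/* Only the core-dump path (FOLL_DUMP) gets special treatment. */
	if (!(flags & FOLL_DUMP))
		return NULL;

	/*
	 * When core dumping, we don't want to allocate unnecessary pages or
	 * page tables.  Return error instead of NULL to skip handle_mm_fault,
	 * then get_dump_page() will return NULL to leave a hole in the dump.
	 * But we can only make this optimization where a hole would surely
	 * be zero-filled if handle_mm_fault() actually did handle it.
	 */
	if (is_vm_hugetlb_page(vma)) {
		struct hstate *h = hstate_vma(vma);

		/*
		 * hugetlb: fail with -EFAULT whenever the page cache has
		 * no page at "address", even for VM_SHARED mappings.
		 */
		if (!hugetlbfs_pagecache_present(h, vma, address))
			return ERR_PTR(-EFAULT);
	} else if ((vma_is_anonymous(vma) || !vma->vm_ops->fault)) {
		return ERR_PTR(-EFAULT);
	}

	return NULL;
}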