From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Linus Torvalds, Kirill Tkhai, Mike Rapoport,
    David Gibson, Kirill Shutemov, Christoph Hellwig, Miaohe Lin,
    Gal Pressman, Jason Gunthorpe, Jann Horn, peterx@redhat.com,
    Jan Kara, Wei Zhang, Mike Kravetz, Andrea Arcangeli, Andrew Morton
Subject: [PATCH v4 5/5] hugetlb: Do early cow when page pinned on src mm
Date: Mon, 8 Feb 2021 22:02:29 -0500
Message-Id: <20210209030229.84991-6-peterx@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210209030229.84991-1-peterx@redhat.com>
References: <20210209030229.84991-1-peterx@redhat.com>
MIME-Version: 1.0

This is the last missing piece of the COW-during-fork effort for the case
where pinned pages are found.  See 70e806e4e645 ("mm: Do early cow for
pinned pages during fork() for ptes", 2020-09-27) for more background: we
do a similar thing here, but for hugetlb rather than ptes.

Note that after Jason's recent work in 57efa1fe5957 ("mm/gup: prevent
gup_fast from racing with COW during fork", 2020-12-15), which is safer
and easier to understand, the whole of copy_page_range() is now safe
against gup-fast, so we no longer need the wr-protect trick proposed in
70e806e4e645.
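
To make the flow easier to follow, below is a minimal userspace sketch of
the decision this patch adds to copy_hugetlb_page_range(): if the source
huge page may be pinned (e.g. for DMA), copy it at fork time instead of
sharing it write-protected.  The struct hpage model, may_be_pinned-style
check and fork_copy_huge_pte() are made-up illustrations, not kernel APIs;
the real code uses page_needs_cow_for_dma(), alloc_huge_page() and
copy_user_huge_page() as in the diff below.

  /* Simplified, illustrative userspace model of the early-COW decision. */
  #include <stdbool.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  #define HPAGE_SIZE  (2UL * 1024 * 1024)

  struct hpage {
      int pincount;   /* > 0: page is pinned, e.g. for DMA */
      char *data;     /* page contents */
  };

  /* Decide what the child gets for one private huge page at fork time. */
  static struct hpage *fork_copy_huge_pte(struct hpage *src, bool is_cow)
  {
      if (is_cow && src->pincount > 0) {
          /*
           * Early COW: the parent keeps the pinned page, the child
           * gets its own copy right away, before any write happens.
           */
          struct hpage *new = calloc(1, sizeof(*new));

          if (!new || !(new->data = malloc(HPAGE_SIZE)))
              abort();            /* keep the sketch simple on OOM */
          memcpy(new->data, src->data, HPAGE_SIZE);
          return new;
      }
      /* Not pinned: share the page; COW (if any) happens on first write. */
      return src;
  }

  int main(void)
  {
      struct hpage page = { .pincount = 1, .data = calloc(1, HPAGE_SIZE) };
      struct hpage *child;

      if (!page.data)
          abort();
      child = fork_copy_huge_pte(&page, true);
      printf("child maps %s page\n",
             child == &page ? "the shared" : "an early-copied");
      return 0;
  }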
Reviewed-by: Mike Kravetz
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/hugetlb.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 62 insertions(+), 4 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 620700f05ff4..7c1a0ecc130e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3727,6 +3727,18 @@ static bool is_hugetlb_entry_hwpoisoned(pte_t pte)
 	return false;
 }
 
+static void
+hugetlb_install_page(struct vm_area_struct *vma, pte_t *ptep, unsigned long addr,
+		     struct page *new_page)
+{
+	__SetPageUptodate(new_page);
+	set_huge_pte_at(vma->vm_mm, addr, ptep, make_huge_pte(vma, new_page, 1));
+	hugepage_add_new_anon_rmap(new_page, vma, addr);
+	hugetlb_count_add(pages_per_huge_page(hstate_vma(vma)), vma->vm_mm);
+	ClearHPageRestoreReserve(new_page);
+	SetHPageMigratable(new_page);
+}
+
 int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			    struct vm_area_struct *vma)
 {
@@ -3736,6 +3748,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 	int cow = is_cow_mapping(vma->vm_flags);
 	struct hstate *h = hstate_vma(vma);
 	unsigned long sz = huge_page_size(h);
+	unsigned long npages = pages_per_huge_page(h);
 	struct address_space *mapping = vma->vm_file->f_mapping;
 	struct mmu_notifier_range range;
 	int ret = 0;
@@ -3784,6 +3797,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
 		entry = huge_ptep_get(src_pte);
 		dst_entry = huge_ptep_get(dst_pte);
+again:
 		if (huge_pte_none(entry) || !huge_pte_none(dst_entry)) {
 			/*
 			 * Skip if src entry none. Also, skip in the
@@ -3807,6 +3821,52 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			}
 			set_huge_swap_pte_at(dst, addr, dst_pte, entry, sz);
 		} else {
+			entry = huge_ptep_get(src_pte);
+			ptepage = pte_page(entry);
+			get_page(ptepage);
+
+			/*
+			 * This is a rare case where we see pinned hugetlb
+			 * pages while they're prone to COW. We need to do the
+			 * COW earlier during fork.
+			 *
+			 * When pre-allocating the page or copying data, we
+			 * need to be without the pgtable locks since we could
+			 * sleep during the process.
+			 */
+			if (unlikely(page_needs_cow_for_dma(vma, ptepage))) {
+				pte_t src_pte_old = entry;
+				struct page *new;
+
+				spin_unlock(src_ptl);
+				spin_unlock(dst_ptl);
+				/* Do not use reserve as it's private owned */
+				new = alloc_huge_page(vma, addr, 1);
+				if (IS_ERR(new)) {
+					put_page(ptepage);
+					ret = PTR_ERR(new);
+					break;
+				}
+				copy_user_huge_page(new, ptepage, addr, vma,
+						    npages);
+				put_page(ptepage);
+
+				/* Install the new huge page if src pte stable */
+				dst_ptl = huge_pte_lock(h, dst, dst_pte);
+				src_ptl = huge_pte_lockptr(h, src, src_pte);
+				spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
+				entry = huge_ptep_get(src_pte);
+				if (!pte_same(src_pte_old, entry)) {
+					put_page(new);
+					/* dst_entry won't change as in child */
+					goto again;
+				}
+				hugetlb_install_page(vma, dst_pte, addr, new);
+				spin_unlock(src_ptl);
+				spin_unlock(dst_ptl);
+				continue;
+			}
+
 			if (cow) {
 				/*
 				 * No need to notify as we are downgrading page
@@ -3817,12 +3877,10 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				 */
 				huge_ptep_set_wrprotect(src, addr, src_pte);
 			}
-			entry = huge_ptep_get(src_pte);
-			ptepage = pte_page(entry);
-			get_page(ptepage);
+
 			page_dup_rmap(ptepage, true);
 			set_huge_pte_at(dst, addr, dst_pte, entry);
-			hugetlb_count_add(pages_per_huge_page(h), dst);
+			hugetlb_count_add(npages, dst);
 		}
 		spin_unlock(src_ptl);
 		spin_unlock(dst_ptl);
-- 
2.26.2