From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Jason Gunthorpe, peterx@redhat.com, "Kirill A. Shutemov", Andrew Morton
Subject: [PATCH v2] mm: Remove src/dst mm parameter in copy_page_range()
Date: Fri, 2 Oct 2020 15:26:47 -0400
Message-Id: <20201002192647.7161-1-peterx@redhat.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0

Both of the mm pointers are no longer needed after commit 7a4830c380f3
("mm/fork: Pass new vma pointer into copy_page_range()").

Jason Gunthorpe also reported that the parameter ordering of
copy_page_range() is odd.  While at it, reorder the parameters to be
logical: (1) always put the dst_* fields before the src_* fields, and
(2) keep parameters of the same type together.
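For quick reference, the interface change can be summarized as below
(this is only a restatement of the include/linux/mm.h and kernel/fork.c
hunks that follow, not additional code):

    /* Old prototype: both mm pointers passed explicitly */
    int copy_page_range(struct mm_struct *dst, struct mm_struct *src,
                        struct vm_area_struct *vma, struct vm_area_struct *new);

    /* New prototype: only the two VMAs; each mm is taken from vma->vm_mm */
    int copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);

    /* The caller in dup_mmap() changes accordingly */
    retval = copy_page_range(tmp, mpnt);  /* was copy_page_range(mm, oldmm, mpnt, tmp) */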
CC: Jason Gunthorpe
CC: Andrew Morton
Reported-by: Kirill A. Shutemov
Signed-off-by: Peter Xu
---
v2:
- further reorder some parameters and line format [Jason]
---
 include/linux/mm.h |   4 +-
 kernel/fork.c      |   2 +-
 mm/memory.c        | 139 ++++++++++++++++++++++++---------------------
 3 files changed, 76 insertions(+), 69 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 16b799a0522c..a26e9f706b25 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1645,8 +1645,8 @@ struct mmu_notifier_range;
 
 void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 		unsigned long end, unsigned long floor, unsigned long ceiling);
-int copy_page_range(struct mm_struct *dst, struct mm_struct *src,
-		    struct vm_area_struct *vma, struct vm_area_struct *new);
+int
+copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
 int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
 		   struct mmu_notifier_range *range,
 		   pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
diff --git a/kernel/fork.c b/kernel/fork.c
index da8d360fb032..a7671f5cb3e1 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -589,7 +589,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 
 		mm->map_count++;
 		if (!(tmp->vm_flags & VM_WIPEONFORK))
-			retval = copy_page_range(mm, oldmm, mpnt, tmp);
+			retval = copy_page_range(tmp, mpnt);
 
 		if (tmp->vm_ops && tmp->vm_ops->open)
 			tmp->vm_ops->open(tmp);
diff --git a/mm/memory.c b/mm/memory.c
index fcfc4ca36eba..8ade87e8600a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -794,15 +794,15 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
  * lock.
  */
 static inline int
-copy_present_page(struct mm_struct *dst_mm, struct mm_struct *src_mm,
-		  pte_t *dst_pte, pte_t *src_pte,
-		  struct vm_area_struct *vma, struct vm_area_struct *new,
-		  unsigned long addr, int *rss, struct page **prealloc,
-		  pte_t pte, struct page *page)
+copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+		  pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
+		  struct page **prealloc, pte_t pte, struct page *page)
 {
+	struct mm_struct *dst_mm = dst_vma->vm_mm;
+	struct mm_struct *src_mm = src_vma->vm_mm;
 	struct page *new_page;
 
-	if (!is_cow_mapping(vma->vm_flags))
+	if (!is_cow_mapping(src_vma->vm_flags))
 		return 1;
 
 	/*
@@ -865,15 +865,15 @@ copy_present_page(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	 * over and copy the page & arm it.
 	 */
 	*prealloc = NULL;
-	copy_user_highpage(new_page, page, addr, vma);
+	copy_user_highpage(new_page, page, addr, src_vma);
 	__SetPageUptodate(new_page);
-	page_add_new_anon_rmap(new_page, new, addr, false);
-	lru_cache_add_inactive_or_unevictable(new_page, new);
+	page_add_new_anon_rmap(new_page, dst_vma, addr, false);
+	lru_cache_add_inactive_or_unevictable(new_page, dst_vma);
 	rss[mm_counter(new_page)]++;
 
 	/* All done, just insert the new page copy in the child */
-	pte = mk_pte(new_page, new->vm_page_prot);
-	pte = maybe_mkwrite(pte_mkdirty(pte), new);
+	pte = mk_pte(new_page, dst_vma->vm_page_prot);
+	pte = maybe_mkwrite(pte_mkdirty(pte), dst_vma);
 	set_pte_at(dst_mm, addr, dst_pte, pte);
 	return 0;
 }
@@ -883,24 +883,22 @@ copy_present_page(struct mm_struct *dst_mm, struct mm_struct *src_mm,
  * is required to copy this pte.
  */
 static inline int
-copy_present_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
-		 pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *vma,
-		 struct vm_area_struct *new,
-		 unsigned long addr, int *rss, struct page **prealloc)
+copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+		 pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
+		 struct page **prealloc)
 {
-	unsigned long vm_flags = vma->vm_flags;
+	struct mm_struct *dst_mm = dst_vma->vm_mm;
+	struct mm_struct *src_mm = src_vma->vm_mm;
+	unsigned long vm_flags = src_vma->vm_flags;
 	pte_t pte = *src_pte;
 	struct page *page;
 
-	page = vm_normal_page(vma, addr, pte);
+	page = vm_normal_page(src_vma, addr, pte);
 	if (page) {
 		int retval;
 
-		retval = copy_present_page(dst_mm, src_mm,
-					   dst_pte, src_pte,
-					   vma, new,
-					   addr, rss, prealloc,
-					   pte, page);
+		retval = copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
+					   addr, rss, prealloc, pte, page);
 		if (retval <= 0)
 			return retval;
 
@@ -957,11 +955,13 @@ page_copy_prealloc(struct mm_struct *src_mm, struct vm_area_struct *vma,
 	return new_page;
 }
 
-static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
-		   pmd_t *dst_pmd, pmd_t *src_pmd, struct vm_area_struct *vma,
-		   struct vm_area_struct *new,
-		   unsigned long addr, unsigned long end)
+static int
+copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+	       pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
+	       unsigned long end)
 {
+	struct mm_struct *dst_mm = dst_vma->vm_mm;
+	struct mm_struct *src_mm = src_vma->vm_mm;
 	pte_t *orig_src_pte, *orig_dst_pte;
 	pte_t *src_pte, *dst_pte;
 	spinlock_t *src_ptl, *dst_ptl;
@@ -1004,15 +1004,15 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		if (unlikely(!pte_present(*src_pte))) {
 			entry.val = copy_nonpresent_pte(dst_mm, src_mm,
 							dst_pte, src_pte,
-							vma, addr, rss);
+							src_vma, addr, rss);
 			if (entry.val)
 				break;
 			progress += 8;
 			continue;
 		}
 		/* copy_present_pte() will clear `*prealloc' if consumed */
-		ret = copy_present_pte(dst_mm, src_mm, dst_pte, src_pte,
-				       vma, new, addr, rss, &prealloc);
+		ret = copy_present_pte(dst_vma, src_vma, dst_pte, src_pte,
+				       addr, rss, &prealloc);
 		/*
 		 * If we need a pre-allocated page for this pte, drop the
 		 * locks, allocate, and try again.
@@ -1047,7 +1047,7 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		entry.val = 0;
 	} else if (ret) {
 		WARN_ON_ONCE(ret != -EAGAIN);
-		prealloc = page_copy_prealloc(src_mm, vma, addr);
+		prealloc = page_copy_prealloc(src_mm, src_vma, addr);
 		if (!prealloc)
 			return -ENOMEM;
 		/* We've captured and resolved the error. Reset, try again. */
@@ -1061,11 +1061,13 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	return ret;
 }
 
-static inline int copy_pmd_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
-		pud_t *dst_pud, pud_t *src_pud, struct vm_area_struct *vma,
-		struct vm_area_struct *new,
-		unsigned long addr, unsigned long end)
+static inline int
+copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+	       pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
+	       unsigned long end)
 {
+	struct mm_struct *dst_mm = dst_vma->vm_mm;
+	struct mm_struct *src_mm = src_vma->vm_mm;
 	pmd_t *src_pmd, *dst_pmd;
 	unsigned long next;
 
@@ -1078,9 +1080,9 @@ static inline int copy_pmd_range(struct mm_struct *dst_mm, struct mm_struct *src
 		if (is_swap_pmd(*src_pmd) || pmd_trans_huge(*src_pmd)
 			|| pmd_devmap(*src_pmd)) {
 			int err;
-			VM_BUG_ON_VMA(next-addr != HPAGE_PMD_SIZE, vma);
+			VM_BUG_ON_VMA(next-addr != HPAGE_PMD_SIZE, src_vma);
 			err = copy_huge_pmd(dst_mm, src_mm,
-					    dst_pmd, src_pmd, addr, vma);
+					    dst_pmd, src_pmd, addr, src_vma);
 			if (err == -ENOMEM)
 				return -ENOMEM;
 			if (!err)
@@ -1089,18 +1091,20 @@ static inline int copy_pmd_range(struct mm_struct *dst_mm, struct mm_struct *src
 		}
 		if (pmd_none_or_clear_bad(src_pmd))
 			continue;
-		if (copy_pte_range(dst_mm, src_mm, dst_pmd, src_pmd,
-				   vma, new, addr, next))
+		if (copy_pte_range(dst_vma, src_vma, dst_pmd, src_pmd,
+				   addr, next))
 			return -ENOMEM;
 	} while (dst_pmd++, src_pmd++, addr = next, addr != end);
 	return 0;
 }
 
-static inline int copy_pud_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
-		p4d_t *dst_p4d, p4d_t *src_p4d, struct vm_area_struct *vma,
-		struct vm_area_struct *new,
-		unsigned long addr, unsigned long end)
+static inline int
+copy_pud_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+	       p4d_t *dst_p4d, p4d_t *src_p4d, unsigned long addr,
+	       unsigned long end)
 {
+	struct mm_struct *dst_mm = dst_vma->vm_mm;
+	struct mm_struct *src_mm = src_vma->vm_mm;
 	pud_t *src_pud, *dst_pud;
 	unsigned long next;
 
@@ -1113,9 +1117,9 @@ static inline int copy_pud_range(struct mm_struct *dst_mm, struct mm_struct *src
 		if (pud_trans_huge(*src_pud) || pud_devmap(*src_pud)) {
 			int err;
 
-			VM_BUG_ON_VMA(next-addr != HPAGE_PUD_SIZE, vma);
+			VM_BUG_ON_VMA(next-addr != HPAGE_PUD_SIZE, src_vma);
 			err = copy_huge_pud(dst_mm, src_mm,
-					    dst_pud, src_pud, addr, vma);
+					    dst_pud, src_pud, addr, src_vma);
 			if (err == -ENOMEM)
 				return -ENOMEM;
 			if (!err)
@@ -1124,18 +1128,19 @@ static inline int copy_pud_range(struct mm_struct *dst_mm, struct mm_struct *src
 		}
 		if (pud_none_or_clear_bad(src_pud))
 			continue;
-		if (copy_pmd_range(dst_mm, src_mm, dst_pud, src_pud,
-				   vma, new, addr, next))
+		if (copy_pmd_range(dst_vma, src_vma, dst_pud, src_pud,
+				   addr, next))
 			return -ENOMEM;
 	} while (dst_pud++, src_pud++, addr = next, addr != end);
 	return 0;
 }
 
-static inline int copy_p4d_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
-		pgd_t *dst_pgd, pgd_t *src_pgd, struct vm_area_struct *vma,
-		struct vm_area_struct *new,
-		unsigned long addr, unsigned long end)
+static inline int
+copy_p4d_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+	       pgd_t *dst_pgd, pgd_t *src_pgd, unsigned long addr,
+	       unsigned long end)
 {
+	struct mm_struct *dst_mm = dst_vma->vm_mm;
 	p4d_t *src_p4d, *dst_p4d;
 	unsigned long next;
 
@@ -1147,20 +1152,22 @@ static inline int copy_p4d_range(struct mm_struct *dst_mm, struct mm_struct *src
 		next = p4d_addr_end(addr, end);
 		if (p4d_none_or_clear_bad(src_p4d))
 			continue;
-		if (copy_pud_range(dst_mm, src_mm, dst_p4d, src_p4d,
-				   vma, new, addr, next))
+		if (copy_pud_range(dst_vma, src_vma, dst_p4d, src_p4d,
+				   addr, next))
 			return -ENOMEM;
 	} while (dst_p4d++, src_p4d++, addr = next, addr != end);
 	return 0;
 }
 
-int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
-		struct vm_area_struct *vma, struct vm_area_struct *new)
+int
+copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
 {
 	pgd_t *src_pgd, *dst_pgd;
 	unsigned long next;
-	unsigned long addr = vma->vm_start;
-	unsigned long end = vma->vm_end;
+	unsigned long addr = src_vma->vm_start;
+	unsigned long end = src_vma->vm_end;
+	struct mm_struct *dst_mm = dst_vma->vm_mm;
+	struct mm_struct *src_mm = src_vma->vm_mm;
 	struct mmu_notifier_range range;
 	bool is_cow;
 	int ret;
@@ -1171,19 +1178,19 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	 * readonly mappings. The tradeoff is that copy_page_range is more
 	 * efficient than faulting.
 	 */
-	if (!(vma->vm_flags & (VM_HUGETLB | VM_PFNMAP | VM_MIXEDMAP)) &&
-	    !vma->anon_vma)
+	if (!(src_vma->vm_flags & (VM_HUGETLB | VM_PFNMAP | VM_MIXEDMAP)) &&
+	    !src_vma->anon_vma)
 		return 0;
 
-	if (is_vm_hugetlb_page(vma))
-		return copy_hugetlb_page_range(dst_mm, src_mm, vma);
+	if (is_vm_hugetlb_page(src_vma))
+		return copy_hugetlb_page_range(dst_mm, src_mm, src_vma);
 
-	if (unlikely(vma->vm_flags & VM_PFNMAP)) {
+	if (unlikely(src_vma->vm_flags & VM_PFNMAP)) {
 		/*
 		 * We do not free on error cases below as remove_vma
 		 * gets called on error from higher level routine
 		 */
-		ret = track_pfn_copy(vma);
+		ret = track_pfn_copy(src_vma);
 		if (ret)
 			return ret;
 	}
@@ -1194,11 +1201,11 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	 * parent mm. And a permission downgrade will only happen if
 	 * is_cow_mapping() returns true.
 	 */
-	is_cow = is_cow_mapping(vma->vm_flags);
+	is_cow = is_cow_mapping(src_vma->vm_flags);
 
 	if (is_cow) {
 		mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
-					0, vma, src_mm, addr, end);
+					0, src_vma, src_mm, addr, end);
 		mmu_notifier_invalidate_range_start(&range);
 	}
 
@@ -1209,8 +1216,8 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		next = pgd_addr_end(addr, end);
 		if (pgd_none_or_clear_bad(src_pgd))
 			continue;
-		if (unlikely(copy_p4d_range(dst_mm, src_mm, dst_pgd, src_pgd,
-					    vma, new, addr, next))) {
+		if (unlikely(copy_p4d_range(dst_vma, src_vma, dst_pgd, src_pgd,
+					    addr, next))) {
 			ret = -ENOMEM;
 			break;
 		}
-- 
2.26.2