From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Jason Gunthorpe, Andrew Morton, Jan Kara, Michal Hocko, Kirill Tkhai,
	Kirill Shutemov, Hugh Dickins, Christoph Hellwig, Andrea Arcangeli,
	John Hubbard, Oleg Nesterov, Leon Romanovsky, Linus Torvalds, Jann Horn
Subject: [PATCH 3/5] mm: Rework return value for copy_one_pte()
Date: Mon, 21 Sep 2020 17:17:42 -0400
Message-Id: <20200921211744.24758-4-peterx@redhat.com>
In-Reply-To: <20200921211744.24758-1-peterx@redhat.com>
References: <20200921211744.24758-1-peterx@redhat.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0

There's one special path for copy_one_pte() with swap entries, in which
add_swap_count_continuation(GFP_ATOMIC) might fail.
In that case we'll return the swp_entry_t so that the caller will release
the locks and redo the same thing with GFP_KERNEL.

It's confusing when copy_one_pte() must return a swp_entry_t (even if all
the ptes are non-swap entries).  More importantly, we face other
requirements to extend this "we need to do something else, but without the
locks held" case.

Rework the return value into something easier to understand, as defined in
enum copy_mm_ret.  We'll pass the swp_entry_t back using the newly
introduced union copy_mm_data parameter.

Another trivial change is to move the reset of the "progress" counter into
the retry path, so that we'll reset it for other reasons too.

This should prepare us for adding new return codes, very soon.

Signed-off-by: Peter Xu
---
 mm/memory.c | 42 +++++++++++++++++++++++++++++-------------
 1 file changed, 29 insertions(+), 13 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 7525147908c4..1530bb1070f4 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -689,16 +689,24 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 }
 #endif
 
+#define COPY_MM_DONE		0
+#define COPY_MM_SWAP_CONT	1
+
+struct copy_mm_data {
+	/* COPY_MM_SWAP_CONT */
+	swp_entry_t entry;
+};
+
 /*
  * copy one vm_area from one task to the other. Assumes the page tables
  * already present in the new task to be cleared in the whole range
  * covered by this vma.
 */
 
-static inline unsigned long
+static inline int
 copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *vma,
-		unsigned long addr, int *rss)
+		unsigned long addr, int *rss, struct copy_mm_data *data)
 {
 	unsigned long vm_flags = vma->vm_flags;
 	pte_t pte = *src_pte;
@@ -709,8 +717,10 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		swp_entry_t entry = pte_to_swp_entry(pte);
 
 		if (likely(!non_swap_entry(entry))) {
-			if (swap_duplicate(entry) < 0)
-				return entry.val;
+			if (swap_duplicate(entry) < 0) {
+				data->entry = entry;
+				return COPY_MM_SWAP_CONT;
+			}
 
 			/* make sure dst_mm is on swapoff's mmlist. */
 			if (unlikely(list_empty(&dst_mm->mmlist))) {
@@ -809,7 +819,7 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 
 out_set_pte:
 	set_pte_at(dst_mm, addr, dst_pte, pte);
-	return 0;
+	return COPY_MM_DONE;
 }
 
 static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
@@ -820,9 +830,9 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	pte_t *orig_src_pte, *orig_dst_pte;
 	pte_t *src_pte, *dst_pte;
 	spinlock_t *src_ptl, *dst_ptl;
-	int progress = 0;
+	int progress, copy_ret = COPY_MM_DONE;
 	int rss[NR_MM_COUNTERS];
-	swp_entry_t entry = (swp_entry_t){0};
+	struct copy_mm_data data;
 
 again:
 	init_rss_vec(rss);
@@ -837,6 +847,7 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	orig_dst_pte = dst_pte;
 	arch_enter_lazy_mmu_mode();
 
+	progress = 0;
 	do {
 		/*
 		 * We are holding two locks at this point - either of them
@@ -852,9 +863,9 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 			progress++;
 			continue;
 		}
-		entry.val = copy_one_pte(dst_mm, src_mm, dst_pte, src_pte,
-					 vma, addr, rss);
-		if (entry.val)
+		copy_ret = copy_one_pte(dst_mm, src_mm, dst_pte, src_pte,
+					vma, addr, rss, &data);
+		if (copy_ret != COPY_MM_DONE)
 			break;
		progress += 8;
 	} while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);
@@ -866,13 +877,18 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	pte_unmap_unlock(orig_dst_pte, dst_ptl);
 	cond_resched();
 
-	if (entry.val) {
-		if (add_swap_count_continuation(entry, GFP_KERNEL) < 0)
+	switch (copy_ret) {
+	case COPY_MM_SWAP_CONT:
+		if (add_swap_count_continuation(data.entry, GFP_KERNEL) < 0)
 			return -ENOMEM;
-		progress = 0;
+		break;
+	default:
+		break;
 	}
+
 	if (addr != end)
 		goto again;
+
 	return 0;
 }
 
-- 
2.26.2