Date: Mon, 12 Aug 2024 15:05:48 -0400
From: Peter Xu
To: David Hildenbrand
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sean Christopherson,
	Oscar Salvador, Jason Gunthorpe, Axel Rasmussen,
	linux-arm-kernel@lists.infradead.org, x86@kernel.org, Will Deacon,
	Gavin Shan, Paolo Bonzini, Zi Yan, Andrew Morton, Catalin Marinas,
	Ingo Molnar, Alistair Popple, Borislav Petkov, Thomas Gleixner,
	kvm@vger.kernel.org, Dave Hansen, Alex Williamson, Yan Zhao
Subject: Re: [PATCH 07/19] mm/fork: Accept huge pfnmap entries
References: <20240809160909.1023470-1-peterx@redhat.com>
 <20240809160909.1023470-8-peterx@redhat.com>
 <8ef394e6-a964-41c4-b33c-0e940b6b9bd8@redhat.com>
 <9155deaa-b6c5-4e6c-95a7-9a5311b7085a@redhat.com>
In-Reply-To: <9155deaa-b6c5-4e6c-95a7-9a5311b7085a@redhat.com>

On Mon, Aug 12, 2024 at 08:50:12PM +0200, David Hildenbrand wrote:
> On 12.08.24 20:29, Peter Xu wrote:
> > On Fri, Aug 09, 2024 at 07:59:58PM +0200, David Hildenbrand wrote:
> > > On 09.08.24 19:15, Peter Xu wrote:
> > > > On Fri, Aug 09, 2024 at 06:32:44PM +0200, David Hildenbrand wrote:
> > > > > On 09.08.24 18:08, Peter Xu wrote:
> > > > > > Teach the fork code to properly copy pfnmaps for pmd/pud levels. Pud is
> > > > > > much easier, the write bit needs to be persisted though for writable and
> > > > > > shared pud mappings like PFNMAP ones, otherwise a follow up write in either
> > > > > > parent or child process will trigger a write fault.
> > > > > >
> > > > > > Do the same for pmd level.
> > > > > >
> > > > > > Signed-off-by: Peter Xu
> > > > > > ---
> > > > > >  mm/huge_memory.c | 27 ++++++++++++++++++++++++---
> > > > > >  1 file changed, 24 insertions(+), 3 deletions(-)
> > > > > >
> > > > > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > > > > index 6568586b21ab..015c9468eed5 100644
> > > > > > --- a/mm/huge_memory.c
> > > > > > +++ b/mm/huge_memory.c
> > > > > > @@ -1375,6 +1375,22 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > > > > >  	pgtable_t pgtable = NULL;
> > > > > >  	int ret = -ENOMEM;
> > > > > > +	pmd = pmdp_get_lockless(src_pmd);
> > > > > > +	if (unlikely(pmd_special(pmd))) {
> > > > > > +		dst_ptl = pmd_lock(dst_mm, dst_pmd);
> > > > > > +		src_ptl = pmd_lockptr(src_mm, src_pmd);
> > > > > > +		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
> > > > > > +		/*
> > > > > > +		 * No need to recheck the pmd, it can't change with write
> > > > > > +		 * mmap lock held here.
> > > > > > +		 */
> > > > > > +		if (is_cow_mapping(src_vma->vm_flags) && pmd_write(pmd)) {
> > > > > > +			pmdp_set_wrprotect(src_mm, addr, src_pmd);
> > > > > > +			pmd = pmd_wrprotect(pmd);
> > > > > > +		}
> > > > > > +		goto set_pmd;
> > > > > > +	}
> > > > > > +
> > > > >
> > > > > I strongly assume we should be using vm_normal_page_pmd() instead of
> > > > > pmd_page() further below. pmd_special() should be mostly limited to GUP-fast
> > > > > and vm_normal_page_pmd().
> > > >
> > > > One thing to mention that it has this:
> > > >
> > > >   if (!vma_is_anonymous(dst_vma))
> > > >     return 0;
> > >
> > > Another obscure thing in this function. It's not the job of copy_huge_pmd()
> > > to make the decision whether to copy, it's the job of vma_needs_copy() in
> > > copy_page_range().
> > >
> > > And now I have to suspect that uffd-wp is broken with this function, because
> > > as vma_needs_copy() clearly states, we must copy, and we don't do that for
> > > PMDs. Ugh.
> > >
> > > What a mess, we should just do what we do for PTEs and we will be fine ;)
> >
> > IIUC it's not a problem: file uffd-wp is different from anonymous, in that
> > it pushes everything down to ptes.
> >
> > It means if we skipped one huge pmd here for file, then it's destined to
> > have nothing to do with uffd-wp, otherwise it should have already been
> > split at the first attempt to wr-protect.
>
> Is that also true for UFFD_FEATURE_WP_ASYNC, when we call
> pagemap_scan_thp_entry()->make_uffd_wp_pmd() ?
>
> I'm not immediately finding the code that does the "pushes everything down
> to ptes", so I might miss that part.

UFFDIO_WRITEPROTECT should have all those covered, while I guess you're
right, looks like the pagemap ioctl is overlooked..

>
> >
> > >
> > > Also, we call copy_huge_pmd() only if "is_swap_pmd(*src_pmd) ||
> > > pmd_trans_huge(*src_pmd) || pmd_devmap(*src_pmd)"
> > >
> > > Would that even be the case with PFNMAP? I suspect that pmd_trans_huge()
> > > would return "true" for special pfnmap, which is rather "surprising", but
> > > fortunate for us.
> >
> > It's definitely not surprising to me as that's the plan.. and I thought it
> > shouldn't be surprising to you - if you remember before I sent this one, I
> > tried to decouple that here with the "thp agnostic" series:
> >
> > https://lore.kernel.org/r/20240717220219.3743374-1-peterx@redhat.com
> >
> > in which you reviewed it (which I appreciated).
> >
> > So yes, pfnmap on pmd so far will report pmd_trans_huge==true.
>
> I review way too much stuff to remember everything :) That certainly screams
> for a cleanup ...
Definitely.

>
> >
> > >
> > > Likely we should be calling copy_huge_pmd() if pmd_leaf() ... cleanup for
> > > another day.
> >
> > Yes, ultimately it should really be a pmd_leaf(), but since I didn't get
> > much feedback there, and that can further postpone this series from being
> > posted I'm afraid, then I decided to just move on with "taking pfnmap as
> > THPs". The corresponding change on this path is here in that series:
> >
> > https://lore.kernel.org/all/20240717220219.3743374-7-peterx@redhat.com/
> >
> > @@ -1235,8 +1235,7 @@ copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
> >  	src_pmd = pmd_offset(src_pud, addr);
> >  	do {
> >  		next = pmd_addr_end(addr, end);
> > -		if (is_swap_pmd(*src_pmd) || pmd_trans_huge(*src_pmd)
> > -			|| pmd_devmap(*src_pmd)) {
> > +		if (is_swap_pmd(*src_pmd) || pmd_is_leaf(*src_pmd)) {
> >  			int err;
> >  			VM_BUG_ON_VMA(next-addr != HPAGE_PMD_SIZE, src_vma);
> >  			err = copy_huge_pmd(dst_mm, src_mm, dst_pmd, src_pmd,
> >
>
> Ah, good.
>
> [...]
>
> > > Yes, as stated above, likely broken with UFFD-WP ...
> > >
> > > I really think we should make this code just behave like it would with PTEs,
> > > instead of throwing in more "different" handling.
> >
> > So it could simply be because file / anon uffd-wp work very differently.
>
> Or because nobody wants to clean up that code ;)

I think in this case maybe the fork() part is all fine?  As long as we can
switch the pagemap ioctl to do proper break-downs when necessary, or even try
to reuse what UFFDIO_WRITEPROTECT does if still possible in some way.

In all cases, definitely sounds like another separate effort.

Thanks,

-- 
Peter Xu
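
For reference, the special-pfnmap branch discussed in this thread amounts to
the following commented paraphrase of the copy_huge_pmd() hunk quoted above.
This is a sketch, not the literal patch: the locals (dst_ptl, src_ptl,
src_vma, addr) and the set_pmd label are assumed to exist in the surrounding
copy_huge_pmd() context exactly as shown in the quoted diff.

	/* Commented paraphrase of the quoted copy_huge_pmd() hunk. */
	pmd = pmdp_get_lockless(src_pmd);
	if (unlikely(pmd_special(pmd))) {
		/* Special (pfnmap) entry: no struct page, copy the entry itself. */
		dst_ptl = pmd_lock(dst_mm, dst_pmd);
		src_ptl = pmd_lockptr(src_mm, src_pmd);
		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
		/* No recheck needed: fork() holds the write mmap lock here. */
		if (is_cow_mapping(src_vma->vm_flags) && pmd_write(pmd)) {
			/* CoW mapping: write-protect the parent's entry in place... */
			pmdp_set_wrprotect(src_mm, addr, src_pmd);
			/* ...and install a write-protected copy in the child. */
			pmd = pmd_wrprotect(pmd);
		}
		/*
		 * Writable shared PFNMAP mappings fall through with the write
		 * bit kept, so neither process takes a spurious write fault
		 * afterwards, matching the commit message quoted above.
		 */
		goto set_pmd;
	}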