Date: Mon, 12 Aug 2024 14:29:47 -0400
From: Peter Xu
To: David Hildenbrand
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sean Christopherson,
	Oscar Salvador, Jason Gunthorpe, Axel Rasmussen,
	linux-arm-kernel@lists.infradead.org, x86@kernel.org, Will Deacon,
	Gavin Shan, Paolo Bonzini, Zi Yan, Andrew Morton, Catalin Marinas,
	Ingo Molnar, Alistair Popple, Borislav Petkov, Thomas Gleixner,
	kvm@vger.kernel.org, Dave Hansen, Alex Williamson, Yan Zhao
Subject: Re: [PATCH 07/19] mm/fork: Accept huge pfnmap entries
References: <20240809160909.1023470-1-peterx@redhat.com>
	<20240809160909.1023470-8-peterx@redhat.com>
	<8ef394e6-a964-41c4-b33c-0e940b6b9bd8@redhat.com>
In-Reply-To: <8ef394e6-a964-41c4-b33c-0e940b6b9bd8@redhat.com>

On Fri, Aug 09, 2024 at 07:59:58PM +0200, David Hildenbrand wrote:
> On 09.08.24 19:15, Peter Xu wrote:
> > On Fri, Aug 09, 2024 at 06:32:44PM +0200, David Hildenbrand wrote:
> > > On 09.08.24 18:08, Peter Xu wrote:
> > > > Teach the fork code to
> > > > properly copy pfnmaps for pmd/pud levels.  Pud is
> > > > much easier, the write bit needs to be persisted though for writable and
> > > > shared pud mappings like PFNMAP ones, otherwise a follow up write in either
> > > > parent or child process will trigger a write fault.
> > > > 
> > > > Do the same for pmd level.
> > > > 
> > > > Signed-off-by: Peter Xu
> > > > ---
> > > >  mm/huge_memory.c | 27 ++++++++++++++++++++++++---
> > > >  1 file changed, 24 insertions(+), 3 deletions(-)
> > > > 
> > > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > > index 6568586b21ab..015c9468eed5 100644
> > > > --- a/mm/huge_memory.c
> > > > +++ b/mm/huge_memory.c
> > > > @@ -1375,6 +1375,22 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > > >  	pgtable_t pgtable = NULL;
> > > >  	int ret = -ENOMEM;
> > > >  
> > > > +	pmd = pmdp_get_lockless(src_pmd);
> > > > +	if (unlikely(pmd_special(pmd))) {
> > > > +		dst_ptl = pmd_lock(dst_mm, dst_pmd);
> > > > +		src_ptl = pmd_lockptr(src_mm, src_pmd);
> > > > +		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
> > > > +		/*
> > > > +		 * No need to recheck the pmd, it can't change with write
> > > > +		 * mmap lock held here.
> > > > +		 */
> > > > +		if (is_cow_mapping(src_vma->vm_flags) && pmd_write(pmd)) {
> > > > +			pmdp_set_wrprotect(src_mm, addr, src_pmd);
> > > > +			pmd = pmd_wrprotect(pmd);
> > > > +		}
> > > > +		goto set_pmd;
> > > > +	}
> > > > +
> > > 
> > > I strongly assume we should be using vm_normal_page_pmd() instead of
> > > pmd_page() further below.  pmd_special() should be mostly limited to
> > > GUP-fast and vm_normal_page_pmd().
> > 
> > One thing to mention is that it has this:
> > 
> >     if (!vma_is_anonymous(dst_vma))
> >             return 0;
> 
> Another obscure thing in this function.  It's not the job of copy_huge_pmd()
> to make the decision whether to copy, it's the job of vma_needs_copy() in
> copy_page_range().
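To illustrate the distinction being raised here (a sketch only, not from the
posted patch): vm_normal_page_pmd() returns NULL for special, struct-page-less
mappings, so a lookup through it skips the refcount/rmap path for pfnmaps by
construction, whereas pmd_page() unconditionally assumes a struct page backs
the pfn.

```c
	/*
	 * Sketch, not the posted patch: using vm_normal_page_pmd() instead
	 * of pmd_page() in copy_huge_pmd().  vm_normal_page_pmd() returns
	 * NULL for special (pfn-only) mappings, so pfnmaps never reach the
	 * struct-page handling below.
	 */
	struct page *page = vm_normal_page_pmd(src_vma, addr, pmd);

	if (page) {
		/* ... normal-page copy path: refcount and rmap updates ... */
	}
```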
> 
> And now I have to suspect that uffd-wp is broken with this function, because
> as vma_needs_copy() clearly states, we must copy, and we don't do that for
> PMDs.  Ugh.
> 
> What a mess, we should just do what we do for PTEs and we will be fine ;)

IIUC it's not a problem: file uffd-wp is different from anonymous, in that it
pushes everything down to ptes.  It means that if we skipped one huge pmd here
for a file mapping, it's destined to have nothing to do with uffd-wp;
otherwise it would have already been split at the first attempt to
wr-protect.

> Also, we call copy_huge_pmd() only if "is_swap_pmd(*src_pmd) ||
> pmd_trans_huge(*src_pmd) || pmd_devmap(*src_pmd)"
> 
> Would that even be the case with PFNMAP? I suspect that pmd_trans_huge()
> would return "true" for special pfnmap, which is rather "surprising", but
> fortunate for us.

It's definitely not surprising to me, as that was the plan.. and I thought it
shouldn't be surprising to you either - if you remember, before I sent this
one I tried to decouple that with the "thp agnostic" series:

https://lore.kernel.org/r/20240717220219.3743374-1-peterx@redhat.com

which you reviewed (much appreciated).  So yes, a pfnmap on pmd so far will
report pmd_trans_huge()==true.

> Likely we should be calling copy_huge_pmd() if pmd_leaf() ... cleanup for
> another day.

Yes, ultimately it should really be pmd_leaf(), but since I didn't get much
feedback there, and waiting could further postpone this series, I decided to
just move on with "taking pfnmaps as THPs".
The corresponding change on this path is here in that series:

https://lore.kernel.org/all/20240717220219.3743374-7-peterx@redhat.com/

@@ -1235,8 +1235,7 @@ copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	src_pmd = pmd_offset(src_pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
-		if (is_swap_pmd(*src_pmd) || pmd_trans_huge(*src_pmd)
-		    || pmd_devmap(*src_pmd)) {
+		if (is_swap_pmd(*src_pmd) || pmd_is_leaf(*src_pmd)) {
 			int err;
 			VM_BUG_ON_VMA(next-addr != HPAGE_PMD_SIZE, src_vma);
 			err = copy_huge_pmd(dst_mm, src_mm, dst_pmd, src_pmd,

> > So it's only about anonymous memory below that.  In that case I feel like
> > the pmd_page() is benign, and actually good.
> 
> Yes, it would likely currently work.
> 
> > Though what you're saying here made me notice my above check doesn't seem
> > to be necessary; I mean, "(is_cow_mapping(src_vma->vm_flags) &&
> > pmd_write(pmd))" can't be true when the special bit is set, aka, for
> > pfnmaps.. and if it's writable for CoW it means it's already anon.
> > 
> > I think I can probably drop that line, perhaps with a VM_WARN_ON_ONCE()
> > making sure it won't happen.
> > 
> > > Again, we should be doing this similar to how we handle PTEs.
> > > 
> > > I'm a bit confused about the "unlikely(!pmd_trans_huge(pmd))" check
> > > below: what else could we have here if it's not a migration entry but a
> > > present entry?
> > 
> > I had a feeling that it was just a safety belt since the 1st day of thp,
> > when Andrea worked that out, so that it'll work with e.g. file truncation
> > races.
> > 
> > But with the current code it looks like it's only anonymous indeed, so
> > that looks not possible, at least from that pov.
> 
> Yes, as stated above, likely broken with UFFD-WP ...
> 
> I really think we should make this code just behave like it would with PTEs,
> instead of throwing in more "different" handling.

So it could simply be because file / anon uffd-wp work very differently.
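To make the VM_WARN_ON_ONCE() idea above concrete, the special-pmd branch with
the CoW wr-protect dropped might look like this (a sketch under that
assumption, not a posted patch):

```c
	pmd = pmdp_get_lockless(src_pmd);
	if (unlikely(pmd_special(pmd))) {
		dst_ptl = pmd_lock(dst_mm, dst_pmd);
		src_ptl = pmd_lockptr(src_mm, src_pmd);
		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
		/*
		 * A writable special pmd in a CoW mapping should be
		 * impossible: writable-CoW implies anon, and anon pmds are
		 * never special.  Warn instead of wr-protecting.
		 */
		VM_WARN_ON_ONCE(is_cow_mapping(src_vma->vm_flags) &&
				pmd_write(pmd));
		goto set_pmd;
	}
```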
Let me know if you still spot something suspicious.  In any case I guess we
can move on with this series; if you do find something, I can tackle it
together with the mremap() issues when I go back to that other thread.

Thanks,

-- 
Peter Xu