Date: Fri, 9 Aug 2024 17:37:14 -0400
From: Peter Xu
To: David Hildenbrand
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Sean Christopherson, Oscar Salvador, Jason Gunthorpe, Axel Rasmussen,
 linux-arm-kernel@lists.infradead.org, x86@kernel.org, Will Deacon,
 Gavin Shan, Paolo Bonzini, Zi Yan, Andrew Morton, Catalin Marinas,
 Ingo Molnar, Alistair Popple, Borislav Petkov, Thomas Gleixner,
 kvm@vger.kernel.org, Dave Hansen, Alex Williamson, Yan Zhao
Subject: Re: [PATCH 06/19] mm/pagewalk: Check pfnmap early for folio_walk_start()
References: <20240809160909.1023470-1-peterx@redhat.com>
 <20240809160909.1023470-7-peterx@redhat.com>

On Fri, Aug 09, 2024 at 07:25:36PM +0200, David Hildenbrand wrote:
> On 09.08.24 18:54, Peter Xu wrote:
> > On Fri, Aug 09, 2024 at 06:20:06PM +0200, David Hildenbrand wrote:
> > > On 09.08.24 18:08, Peter Xu wrote:
> > > > Pfnmaps can always be identified with special bits in the ptes/pmds/puds.
> > > > However, that's unnecessary if the vma is stable and it's mapped
> > > > under VM_PFNMAP | VM_IO.
> > > >
> > > > Instead of adding similar checks at all the levels for huge pfnmaps,
> > > > let folio_walk_start() fail even earlier for these mappings.  It's
> > > > also something gup-slow already does, so make them match.
> > > >
> > > > Cc: David Hildenbrand
> > > > Signed-off-by: Peter Xu
> > > > ---
> > > >  mm/pagewalk.c | 5 +++++
> > > >  1 file changed, 5 insertions(+)
> > > >
> > > > diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> > > > index cd79fb3b89e5..fd3965efe773 100644
> > > > --- a/mm/pagewalk.c
> > > > +++ b/mm/pagewalk.c
> > > > @@ -727,6 +727,11 @@ struct folio *folio_walk_start(struct folio_walk *fw,
> > > >  	p4d_t *p4dp;
> > > >
> > > >  	mmap_assert_locked(vma->vm_mm);
> > > > +
> > > > +	/* It has no folio backing the mappings at all.. */
> > > > +	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
> > > > +		return NULL;
> > > > +
> > >
> > > That is in general not what we want, and we still have some places
> > > that wrongly hard-code that behavior.
> > >
> > > In a MAP_PRIVATE mapping you might have anon pages that we can happily
> > > walk.
> > >
> > > vm_normal_page() / vm_normal_page_pmd() [and, as commented as a TODO,
> > > vm_normal_page_pud()] should be able to identify PFN maps and reject
> > > them, no?
> >
> > Yep, I think we can also rely on the special bit.
> >
> > When I was working on this whole series, I must confess I was already
> > confused about the real users of MAP_PRIVATE pfnmaps.  E.g. we probably
> > don't need pfnmap support in mprotect/fork/... at least for our use
> > case; MAP_PRIVATE is one step further still.
>
> Yes, it's rather a corner case indeed.
>
> > Here I chose to follow gup-slow, and I suppose you meant that's also
> > wrong?
>
> I assume just nobody really noticed, just like nobody noticed that
> walk_page_test() skips VM_PFNMAP (but not VM_IO :) ).
I noticed it, and that's one of the reasons why this series can be small:
the walk-page callers are left intact.

> Your process memory stats will likely miss anon folios on COW PFNMAP
> mappings ... in the rare cases where they exist (e.g., mmap() of
> /dev/mem).

Do you mean /proc/$PID/status?  I thought that (aka, the mm counters)
should be fine with anon pages CoWed on top of private pfnmaps, but
possibly I misunderstood what you meant.

> > If so, would it make sense to keep them aligned as of now, and change
> > them altogether?  Or do you think we should just rely on the special
> > bits?
>
> GUP already refuses to work on a lot of other stuff, so likely not a
> good use of time unless somebody complains.
>
> But yes, long-term we should make all code either respect that it could
> happen (and bury less awkward checks in page table walkers) or rip
> support for MAP_PRIVATE PFNMAP out completely.
>
> > And, just curious: is there any use case you're aware of that can
> > benefit from caring about PRIVATE pfnmaps so far, especially in this
> > path?
>
> In general, MAP_PRIVATE pfnmaps are not really useful on things like
> MMIO.
>
> There was a discussion (in VM_PAT) some time ago whether we could remove
> MAP_PRIVATE PFNMAPs completely [1].  At least some users still use COW
> mappings on /dev/mem, although not many (and they might not actually
> write to these areas).

Ah, looks like the private mapping of /dev/mem is the only such case we
know of.

> I'm happy if someone wants to try ripping that out; I'm not brave
> enough :)
>
> [1] https://lkml.kernel.org/r/1f2a8ed4-aaff-4be7-b3b6-63d2841a2908@redhat.com
>
> > As far as I read, none of folio_walk_start()'s users so far should
> > even stumble on top of a pfnmap, shared or private.  But that's a
> > fairly quick glimpse only.
>
> do_pages_stat()->do_pages_stat_array() should be able to trigger it, if
> you pass "nodes=NULL" to move_pages().
So assume this is also about a private mapping over /dev/mem: someone
writes some pages over the MMIO regions there, then uses move_pages() to
query which node those pages reside on?  Hmm.. OK :)

> Maybe s390x could be tricked into it, but likely as you say, most code
> shouldn't trigger it.  The function itself should be handling it
> correctly as of today, though.

So indeed I cannot justify that it won't be used, and it's not a huge deal
if we stick with the special bits.  Let me go with that in the next
version for folio_walk_start().

Thanks,

-- 
Peter Xu