From: David Hildenbrand
Organization: Red Hat
Subject: Re: [PATCH v6 0/9] variable-order, large folios for anonymous memory
Date: Fri, 6 Oct 2023 22:06:21 +0200
Message-ID: <6d89fdc9-ef55-d44e-bf12-fafff318aef8@redhat.com>
In-Reply-To: <20230929114421.3761121-1-ryan.roberts@arm.com>
References: <20230929114421.3761121-1-ryan.roberts@arm.com>
To: Ryan Roberts, Andrew Morton, Matthew Wilcox, Yin Fengwei, Yu Zhao,
 Catalin Marinas, Anshuman Khandual, Yang Shi, "Huang, Ying", Zi Yan,
 Luis Chamberlain, Itaru Kitayama, "Kirill A. Shutemov", John Hubbard,
 David Rientjes, Vlastimil Babka, Hugh Dickins
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org

On 29.09.23 13:44, Ryan Roberts wrote:
> Hi All,

Let me highlight some core
decisions on the things discussed in the previous alignment meetings,
and comment on them.

>
> This is v6 of a series to implement variable order, large folios for
> anonymous memory. (previously called "ANON_LARGE_FOLIO",
> "LARGE_ANON_FOLIO", "FLEXIBLE_THP", but now exposed as an extension
> to THP; "small-order THP"). The objective of this is to improve
> performance by allocating larger chunks of memory during anonymous
> page faults:

Change number 1: Let's call these things THP.

Fine with me; I previously rooted for that but was told that end users
could be confused. I think the important bit is that we don't mess up
the stats, and that when we talk about THP we default to "PMD-sized
THP" unless we explicitly include the other ones.

I dislike exposing "orders" to the users, but I'm happy to be
convinced why I am wrong and it is a good idea.

So maybe "Small THP"/"Small-sized THP" is better. Or "Medium-sized
THP" -- as said, I think FreeBSD tends to call it "Medium-sized
superpages". But what's small/medium/large is debatable. "Small" at
least implies that it's smaller than what we used to know, which is a
fact.

Can we also now use the terminology consistently? (e.g.,
"variable-order, large folios for anonymous memory" -> "Small-sized
anonymous THP"; you can just point at the previous patch set name in
the cover letter)

>
> 1) Since SW (the kernel) is dealing with larger chunks of memory than
>    base pages, there are efficiency savings to be had; fewer page
>    faults, batched PTE and RMAP manipulation, reduced lru list, etc.
>    In short, we reduce kernel overhead. This should benefit all
>    architectures.
> 2) Since we are now mapping physically contiguous chunks of memory,
>    we can take advantage of HW TLB compression techniques. A
>    reduction in TLB pressure speeds up kernel and user space. arm64
>    systems have 2 mechanisms to coalesce TLB entries; "the contiguous
>    bit" (architectural) and HPA (uarch).
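[Aside, since "orders" vs. human-readable sizes keeps coming up: a
folio of order N covers PAGE_SIZE << N bytes, so the two namings are
interchangeable. A quick sketch of the correspondence, assuming 4 KiB
base pages (the specific orders listed are just illustrative):]

```shell
# size = PAGE_SIZE << order; with 4 KiB base pages, order 9 is the
# PMD-sized 2048 kB case, order 4 is the 64 kB contiguous-bit size.
page_kb=4
for order in 0 2 4 6 9; do
    echo "order $order -> $((page_kb << order)) kB"
done
```

[This is also the argument for size-based names like "hugepages-64kB":
"order 4" only means 64 kB on a 4 KiB-base-page kernel.]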
>
> The major change in this revision is the addition of sysfs controls
> to allow this "small-order THP" to be enabled/disabled/configured
> independently of PMD-order THP. The approach I've taken differs a bit
> from previous discussions; instead of creating a whole new interface
> ("large_folio"), I'm extending THP. I personally think this makes
> things clearer and more extensible. See [6] for detailed rationale.

Change 2: sysfs interface.

If we call it THP, it shall go under
"/sys/kernel/mm/transparent_hugepage/", I agree. What we expose there,
and how, is TBD. Again, I'm not a friend of "orders" and bitmaps at
all. We can do better if we want to go down that path.

Maybe we should take a look at hugetlb, and how they added support for
multiple sizes. What *might* make sense could be (depending on which
values we actually support!):

/sys/kernel/mm/transparent_hugepage/hugepages-64kB/
/sys/kernel/mm/transparent_hugepage/hugepages-128kB/
/sys/kernel/mm/transparent_hugepage/hugepages-256kB/
/sys/kernel/mm/transparent_hugepage/hugepages-512kB/
/sys/kernel/mm/transparent_hugepage/hugepages-1024kB/
/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/

Each one would contain an "enabled" and "defrag" file. Do we want
something minimal first? Start with the "enabled" option:

enabled: always [global] madvise never

Initially, we would set it for PMD-sized THP to "global" and for
everything else to "never". That sounds reasonable at least to me, and
we would be using what we learned from THP (as John suggested). That
still gives reasonable flexibility without going too wild, and, IMHO,
a better interface.

I understand Yu's point about ABI discussions and "0 knobs". I'm happy
as long as we can have something that won't hurt us later and that we
can still use in distributions within a reasonable timeframe.
Enabling/disabling individual sizes does not sound too restrictive to
me. And we could always add an "auto" setting later and default to
that with a new kconfig knob.
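[To make the proposed layout and defaults concrete: the sketch below
builds the hugepages-<size>kB tree under a temp directory, since these
paths are a proposal and not an existing kernel interface -- on a real
system the writes would go to /sys/kernel/mm/transparent_hugepage/.]

```shell
# Mock of the proposed per-size THP sysfs layout (hypothetical paths,
# mirroring hugetlb's hugepages-<size>kB convention).
root=$(mktemp -d)/transparent_hugepage
for size in 64 128 256 512 1024 2048; do
    mkdir -p "$root/hugepages-${size}kB"
    if [ "$size" -eq 2048 ]; then
        # PMD-sized THP would default to "global" ...
        echo global > "$root/hugepages-${size}kB/enabled"
    else
        # ... and everything else to "never".
        echo never > "$root/hugepages-${size}kB/enabled"
    fi
done
cat "$root/hugepages-64kB/enabled"      # never
cat "$root/hugepages-2048kB/enabled"    # global
```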
If someone wants to configure it, why not. Let's just prepare a way to
handle this "better" automatically in the future (if ever ...).

Change 3: Stats

> /proc/meminfo:
>   Introduce new "AnonHugePteMap" field, which reports the amount of
>   memory (in KiB) mapped from large folios globally (similar to the
>   AnonHugePages field).

AnonHugePages is and remains "PMD-sized THP that is mapped using a
PMD", I think we all agree on that. It should have been named
"AnonPmdMapped" or "AnonHugePmdMapped"; too bad, we can't change that.
"AnonHugePteMap" had better be "AnonHugePteMapped".

But I wonder if we want to expose this "PteMapped" to user space *at
all*. Why should they care if it's PTE mapped? For PMD-sized THP it
makes a bit of sense, because !PMD implied !performance, and one might
have been able to troubleshoot that somehow. For PTE-mapped THP it
doesn't make much sense really; they are always PTE-mapped.

That also raises the question of how you would account a PTE-mapped
THP. The whole thing? Only the parts that are mapped? Let's better not
go down that path.

That leaves the question of why we would want to include them here at
all in a special PTE-mapped way.

Again, let's look at hugetlb: I prepared one 1 GiB and one 2 MiB page.

HugePages_Total:       1
HugePages_Free:        1
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:         1050624 kB

-> Only the last one gives the sum; the other stats don't even mention
the other sizes. [how do we get their stats, if at all?]

So maybe we only want a summary of how many anon huge pages of any
size are allocated (independent of the PTE vs. PMD mapping), and some
other source to eventually inspect how the different sizes behave.

But note that for non-PMD-sized file THP we don't even have special
counters! ... so maybe we should also defer any such stats and come up
with something uniform for all types of non-PMD-sized THP.

The same discussion applies to all other stats.
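[As a sanity check on the hugetlb numbers above: "Hugetlb" is indeed
the only line that sums across page sizes -- one 1 GiB page plus one
2 MiB page accounts for the 1050624 kB exactly:]

```shell
# One 1 GiB page + one 2 MiB page, both expressed in kB, matches the
# "Hugetlb:" line while "Hugepagesize:" only shows the 2 MiB default.
gib_kb=$((1 * 1024 * 1024))   # 1048576 kB
mib2_kb=2048                  # 2048 kB
echo "Hugetlb: $((gib_kb + mib2_kb)) kB"   # Hugetlb: 1050624 kB
```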
>
> Because we now have runtime enable/disable control, I've removed the
> compile time Kconfig switch. It still defaults to runtime-disabled.
>
> NOTE: These changes should not be merged until the prerequisites are
> complete. These are in progress and tracked at [7].

We should probably list them here, and classify which ones we see as a
strict requirement and which ones might be an optimization.

Now, these are just my thoughts, and I'm happy to hear other thoughts.

-- 
Cheers,

David / dhildenb