From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-doc@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
	David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)",
	Tejun Heo, Zefan Li, Johannes Weiner, Michal Koutný,
	Jonathan Corbet, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, Muchun Song, "Liam R. Howlett",
	Lorenzo Stoakes, Vlastimil Babka, Jann Horn
Subject: [PATCH v3 06/20] mm: move _entire_mapcount in folio to page[2] on 32bit
Date: Mon, 3 Mar 2025 17:29:59 +0100
Message-ID: <20250303163014.1128035-7-david@redhat.com>
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20250303163014.1128035-1-david@redhat.com>
References: <20250303163014.1128035-1-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 8bit

Let's free up some space on 32bit in page[1] by moving the
_entire_mapcount to page[2].

Ordinary folios only use the entire mapcount with PMD mappings, so
order-1 folios never use it. Similarly, hugetlb folios are always
larger than order-1. The entire mapcount is therefore essentially
unused for all order-1 folios, and moving it out of them will not
change anything.

On 32bit, simply check in folio_entire_mapcount() whether we have an
order-1 folio, and return 0 in that case.

Note that THPs on 32bit are not particularly common (and we don't care
too much about performance there), but we want to keep this working
reliably, because we will likely want to use large folios on 32bit as
well in the future, independent of PMD leaf support.

Once we dynamically allocate "struct folio", the 32bit specifics will
go away again; even small folios could then have an entire mapcount.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm.h       |  2 ++
 include/linux/mm_types.h |  3 ++-
 mm/internal.h            |  5 +++--
 mm/page_alloc.c          | 12 ++++++++----
 4 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c1414491c0de2..53dd4f99fdabc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1333,6 +1333,8 @@ static inline int is_vmalloc_or_module_addr(const void *x)
 static inline int folio_entire_mapcount(const struct folio *folio)
 {
 	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
+	if (!IS_ENABLED(CONFIG_64BIT) && unlikely(folio_large_order(folio) == 1))
+		return 0;
 	return atomic_read(&folio->_entire_mapcount) + 1;
 }
 
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 31f466d8485bc..c83dd2f1ee25e 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -385,9 +385,9 @@ struct folio {
 			union {
 				struct {
 					atomic_t _large_mapcount;
-					atomic_t _entire_mapcount;
 					atomic_t _nr_pages_mapped;
 #ifdef CONFIG_64BIT
+					atomic_t _entire_mapcount;
 					atomic_t _pincount;
 #endif /* CONFIG_64BIT */
 				};
@@ -409,6 +409,7 @@ struct folio {
 	/* public: */
 			struct list_head _deferred_list;
 #ifndef CONFIG_64BIT
+			atomic_t _entire_mapcount;
 			atomic_t _pincount;
 #endif /* !CONFIG_64BIT */
 	/* private: the union with struct page is transitional */
diff --git a/mm/internal.h b/mm/internal.h
index 378464246f259..9860e65ffc945 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -719,10 +719,11 @@ static inline void prep_compound_head(struct page *page, unsigned int order)
 
 	folio_set_order(folio, order);
 	atomic_set(&folio->_large_mapcount, -1);
-	atomic_set(&folio->_entire_mapcount, -1);
 	atomic_set(&folio->_nr_pages_mapped, 0);
-	if (IS_ENABLED(CONFIG_64BIT) || order > 1)
+	if (IS_ENABLED(CONFIG_64BIT) || order > 1) {
 		atomic_set(&folio->_pincount, 0);
+		atomic_set(&folio->_entire_mapcount, -1);
+	}
 	if (order > 1)
 		INIT_LIST_HEAD(&folio->_deferred_list);
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 594a552c735cd..b0739baf7b07f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -951,10 +951,6 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page)
 	switch (page - head_page) {
 	case 1:
 		/* the first tail page: these may be in place of ->mapping */
-		if (unlikely(folio_entire_mapcount(folio))) {
-			bad_page(page, "nonzero entire_mapcount");
-			goto out;
-		}
 		if (unlikely(folio_large_mapcount(folio))) {
 			bad_page(page, "nonzero large_mapcount");
 			goto out;
@@ -964,6 +960,10 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page)
 			goto out;
 		}
 		if (IS_ENABLED(CONFIG_64BIT)) {
+			if (unlikely(atomic_read(&folio->_entire_mapcount) + 1)) {
+				bad_page(page, "nonzero entire_mapcount");
+				goto out;
+			}
 			if (unlikely(atomic_read(&folio->_pincount))) {
 				bad_page(page, "nonzero pincount");
 				goto out;
@@ -977,6 +977,10 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page)
 			goto out;
 		}
 		if (!IS_ENABLED(CONFIG_64BIT)) {
+			if (unlikely(atomic_read(&folio->_entire_mapcount) + 1)) {
+				bad_page(page, "nonzero entire_mapcount");
+				goto out;
+			}
 			if (unlikely(atomic_read(&folio->_pincount))) {
 				bad_page(page, "nonzero pincount");
 				goto out;
-- 
2.48.1
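
(Also illustration only, not part of the patch: the new
free_tail_page_prepare() checks sit inside the CONFIG_64BIT-specific
branches because that is where the field now lives, and they read the
raw atomic and test "value + 1" because _entire_mapcount, like
_large_mapcount, is stored biased by -1: prep_compound_head() parks it
at -1 and folio_entire_mapcount() adds 1 back. A minimal userspace
sketch of that encoding, with the made-up "raw" standing in for
atomic_read(&folio->_entire_mapcount):)

#include <assert.h>

/* the counter rests at -1; "raw + 1" is the logical entire mapcount */
static int entire_mapcount_from_raw(int raw)
{
	return raw + 1;
}

int main(void)
{
	int raw = -1;		/* as set in prep_compound_head() */

	/* freshly prepared folio: no entire (PMD) mapping */
	assert(entire_mapcount_from_raw(raw) == 0);

	raw++;			/* an entire (PMD) mapping is taken */
	assert(entire_mapcount_from_raw(raw) == 1);
	/* free_tail_page_prepare() would flag this folio ... */
	assert(raw + 1 != 0);	/* ... as "nonzero entire_mapcount" */

	raw--;			/* last entire mapping dropped */
	assert(raw + 1 == 0);	/* back at the resting value: OK to free */
	return 0;
}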