Date: Tue, 21 Apr 2026 15:39:29 +0200
From: "David Hildenbrand (Arm)"
Subject: Re: [PATCH v3] mm: shmem: always support large folios for internal shmem mount
To: Baolin Wang, akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, ziy@nvidia.com, ljs@kernel.org, lance.yang@linux.dev,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org
References: <26f954be62348591e720c4e8b7a9099b74dc1d6d.1776331555.git.baolin.wang@linux.alibaba.com>
 <1b3c0401-6d10-4a28-97c8-8e3858d8dc3d@kernel.org>
 <015de194-99b9-4f9e-8c89-d35807c6fd08@linux.alibaba.com>
 <07e26d39-6155-4661-b3df-c2419535ed43@kernel.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 4/21/26 08:27, Baolin Wang wrote:
> 
> 
> On 4/21/26 3:00 AM, David Hildenbrand (Arm) wrote:
>> On 4/17/26 14:45, Baolin Wang wrote:
>>>
>>>
>>>
>>> Indeed. Good point.
>>>
>>>
>>> Not really. There could be files created before remount whose mappings
>>> don't support large folios (with the 'huge=never' option), while files
>>> created after remount will have mappings that support large folios (if
>>> remounted with the 'huge=always' option).
>>>
>>> It looks like the previous commit 5a90c155defa was also problematic. The
>>> huge mount option has introduced a lot of tricky issues:(
>>>
>>> Now I think Zi's previous suggestion should be able to clean up this
>>> mess? That is, calling mapping_set_large_folios() unconditionally for
>>> all shmem mounts, and revisiting Kefeng's first version to fix the
>>> performance issue.
>>
>> Okay, so you'll send a patch to just set mapping_set_large_folios()
>> unconditionally?
> 
> I'm still hesitating on this. If we set mapping_set_large_folios()
> unconditionally, we need to re-fix the performance regression that was
> addressed by commit 5a90c155defa.

Just so I can follow: where is the test for large folios, i.e., where would
we unlock large folios and cause a regression?

> 
> But it's hard for me to convince myself to add a new flag similar to
> IOCB_NO_LARGE_CHUNK for this hack (like the patch in [1] does).
>
>>> [1] https://lore.kernel.org/all/20240914140613.2334139-1-
>>> wangkefeng.wang@huawei.com/
>>
>> Is that really required? Which call path would be the problematic bit
>> with the above?
>>
>> I'd say, we'd check in the large folio allocation code whether ->huge is
>> set to never instead?
> 
> Yes, this is exactly our current logic. When allocating large folios,
> we'll check the ->huge setting in shmem_huge_global_enabled(), which
> means large folio allocations always respect the ->huge setting.

Makes sense.

> 
> But as I mentioned earlier, the ->huge setting cannot keep the
> mapping_set_large_folios() setting consistent across all mappings in the
> entire tmpfs mount. My concern is that under the same tmpfs mount, after
> remount, we might end up with some mappings supporting large folios
> (calling mapping_set_large_folios()) while others don't.

If we at least always set mapping_set_large_folios(), then there is no
inconsistency in that regard :)
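
Roughly something like this (only a sketch of what I mean, not the actual
patch; shmem_setup_mapping() and shmem_allow_large_folio() are made-up
helper names here, and the real policy check would of course stay in
shmem_huge_global_enabled()):

static void shmem_setup_mapping(struct inode *inode)
{
	/* Always advertise large folio support, independent of huge=. */
	mapping_set_large_folios(inode->i_mapping);
}

static bool shmem_allow_large_folio(struct inode *inode)
{
	/*
	 * Allocation-time policy: huge=never still wins, so remounting
	 * with huge=never only stops new large folios from being
	 * allocated; it does not change what the mapping advertises.
	 */
	return SHMEM_SB(inode->i_sb)->huge != SHMEM_HUGE_NEVER;
}

IOW, the per-mapping flag would be invariant across remounts, and only the
allocation-time policy changes.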

> 
> However, I got some insights from Documentation/admin-guide/mm/
> transhuge.rst. Does this mean that after remount, whether the mappings
> of existing files support large folios should remain unchanged?

That's the current behavior, right?

> 
> “
> ``mount -o remount,huge= /mountpoint`` works fine after mount:
> remounting ``huge=never`` will not attempt to break up huge pages at
> all, just stop more from being allocated.
> ”
> 
> Do you think this makes sense?

I suspect that matches existing behavior, so it should be fine.

-- 
Cheers,

David