Message-ID: <98f73b2f-ec1c-4e81-bfb2-6e02ebc4cdae@arm.com>
Date: Wed, 4 Sep 2024 21:11:57 +0530
Subject: Re: [PATCH v2 0/2] Do not shatter hugezeropage on wp-fault
From: Dev Jain <dev.jain@arm.com>
To: Ryan Roberts, akpm@linux-foundation.org, david@redhat.com,
 willy@infradead.org, kirill.shutemov@linux.intel.com
Cc: anshuman.khandual@arm.com, catalin.marinas@arm.com, cl@gentwo.org,
 vbabka@suse.cz, mhocko@suse.com, apopple@nvidia.com,
 dave.hansen@linux.intel.com, will@kernel.org, baohua@kernel.org,
 jack@suse.cz, mark.rutland@arm.com, hughd@google.com,
 aneesh.kumar@kernel.org, yang@os.amperecomputing.com, peterx@redhat.com,
 ioworker0@gmail.com, jglisse@google.com,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
In-Reply-To: <2427338d-7be5-4939-8d01-6d99b9167fea@arm.com>
References: <20240904100923.290042-1-dev.jain@arm.com>
 <2427338d-7be5-4939-8d01-6d99b9167fea@arm.com>

On 9/4/24 17:06, Ryan Roberts wrote:
> Hi Dev,
>
> On 04/09/2024 11:09, Dev Jain wrote:
>> It was observed at [1] and [2] that the current kernel behaviour of
>> shattering a hugezeropage is inconsistent and suboptimal. For a VMA with
>> a THP-allowable order, when we write-fault on it, the kernel installs a
>> PMD-mapped THP.
>> On the other hand, if we first get a read fault, we get
>> a PMD pointing to the hugezeropage; a subsequent write then triggers a
>> write-protection fault, shattering the hugezeropage into one writable
>> page with all the other PTEs write-protected. The conclusion is that,
>> compared to the case of a single write-fault, applications have to
>> suffer 512 extra page faults if they were to use the VMA as such, plus
>> we get the overhead of khugepaged trying to replace that area with a
>> THP anyway.
>>
>> Instead, replace the hugezeropage with a THP on wp-fault.
>>
>> v1->v2:
>>  - Wrap do_huge_zero_wp_pmd_locked() around lock and unlock
>>  - Call thp_fault_alloc() before do_huge_zero_wp_pmd_locked() to avoid
>>    calling a sleeping function from spinlock context
>>
>> [1]: https://lore.kernel.org/all/3743d7e1-0b79-4eaf-82d5-d1ca29fe347d@arm.com/
>> [2]: https://lore.kernel.org/all/1cfae0c0-96a2-4308-9c62-f7a640520242@arm.com/
>>
>> Dev Jain (2):
>>   mm: Abstract THP allocation
>>   mm: Allocate THP on hugezeropage wp-fault
>>
>>  include/linux/huge_mm.h |   6 ++
>>  mm/huge_memory.c        | 171 +++++++++++++++++++++++++++++-----------
>>  mm/memory.c             |   5 +-
>>  3 files changed, 136 insertions(+), 46 deletions(-)
>>
> What is the base for this? It doesn't apply on top of mm-unstable.

Sorry, forgot to mention, it applies on v6.11-rc5.

> Thanks,
> Ryan
>