Date: Wed, 22 Jun 2022 16:35:16 +0800
From: Baoquan He
To: Catalin Marinas
Cc: Kefeng Wang, Zhen Lei, Ard Biesheuvel, Mark Rutland, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, x86@kernel.org, H. Peter Anvin,
    Eric Biederman, Rob Herring, Frank Rowand, devicetree@vger.kernel.org,
    Dave Young, Vivek Goyal, kexec@lists.infradead.org,
    linux-kernel@vger.kernel.org, Will Deacon,
    linux-arm-kernel@lists.infradead.org, Jonathan Corbet,
    linux-doc@vger.kernel.org, Randy Dunlap, Feng Zhou, Chen Zhou,
    John Donnelly, Dave Kleikamp, liushixin
Subject: Re: [PATCH 5/5] arm64: kdump: Don't defer the reservation of crash high memory
References: <20220613080932.663-1-thunder.leizhen@huawei.com>
            <20220613080932.663-6-thunder.leizhen@huawei.com>
            <3f66323d-f371-b931-65fb-edfae0f01c88@huawei.com>

Hi Catalin,

On 06/21/22 at 07:04pm, Catalin Marinas wrote:
> On Tue, Jun 21, 2022 at 02:24:01PM +0800, Kefeng Wang wrote:
> > On 2022/6/21 13:33, Baoquan He wrote:
> > > On 06/13/22 at 04:09pm, Zhen Lei wrote:
> > > > If the crashkernel has both high memory above the DMA zones and low
> > > > memory in the DMA zones, kexec always loads content such as the Image
> > > > and dtb into the high memory instead of the low memory. This means
> > > > that only the high memory requires write protection based on
> > > > page-level mapping. The allocation of high memory does not depend on
> > > > the DMA boundary, so we can reserve the high memory first even if the
> > > > crashkernel reservation is deferred.
> > > >
> > > > This means that block mapping can still be used for the rest of the
> > > > kernel linear address space, the TLB miss rate can be reduced and
> > > > system performance will be improved.
> > >
> > > Ugh, this looks a little ugly, honestly.
> > >
> > > If it is certain that arm64 can't split large page mappings of the
> > > linear region, this patch is one way to optimize the linear mapping.
> > > Given that a kdump setting is necessary on arm64 servers, the boot
> > > speed is heavily impacted.
> >
> > Is there some conclusion or discussion showing that arm64 can't split
> > large page mappings?
> >
> > Could the crashkernel reservation (and the KFENCE pool) be split
> > dynamically?
> >
> > I found Mark's reply to "arm64: remove page granularity limitation from
> > KFENCE"[1]:
> >
> >   "We also avoid live changes from block<->table mappings, since the
> >   architecture gives us very weak guarantees there and generally requires
> >   a Break-Before-Make sequence (though IIRC this was tightened up
> >   somewhat, so maybe going one way is supposed to work). Unless it's
> >   really necessary, I'd rather not split these block mappings while
> >   they're live."
>
> The problem with splitting is that you can end up with two entries in
> the TLB for the same VA->PA mapping (e.g. one for a 4KB page and another
> for a 2MB block). In the lucky case, the CPU will trigger a TLB conflict
> abort (but it can be worse, such as loss of coherency).

Thanks for this explanation. Is this a drawback of the arm64 design?
The x86 code does the same thing without issue; is there a way to overcome
this on arm64, from the hardware or the software side?

I once had an arm64 server with huge memory, and its boot time differed
noticeably with and without a crashkernel setting. The more frequent TLB
misses and flushes also cost performance. It would be a real pity to have
a very powerful arm64 CPU and plenty of system capacity, yet be
bottlenecked by this drawback.

>
> Prior to FEAT_BBM (added in ARMv8.4), such a scenario was not allowed at
> all; the software would have to unmap the range, TLBI, remap. With
> FEAT_BBM (level 2), we can do this without tearing the mapping down, but
> we still need to handle the potential TLB conflict abort. The handler
> only needs a TLBI, but if it touches the memory range being changed it
> risks faulting again. With vmap stacks and the kernel image mapped in
> the vmalloc space, we have a small window where this could be handled,
> but we probably can't go into the C part of the exception handling
> (tracing etc. may access a kmalloc'ed object, for example).
>
> Another option is to do a stop_machine() (if multi-processor at that
> point), disable the MMUs, modify the page tables, re-enable the MMU, but
> it's also complicated.
>
> --
> Catalin
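
For reference, the break-before-make sequence described above, applied to
splitting a live 2MB PMD block mapping into a table of 4KB PTEs, would look
roughly like the sketch below. This is schematic only, not actual arm64
kernel code: the helper split_pmd_block_bbm() is made up for illustration,
the page-table accessors follow the usual Linux naming but their exact
signatures and required includes vary between kernel versions, and locking
and error handling are omitted.

#include <linux/mm.h>
#include <asm/tlbflush.h>

/*
 * Schematic break-before-make split of one live 2MB block mapping into
 * 512 x 4KB pages.  @new_table must already be allocated and must not be
 * reachable through any page table yet.
 */
static void split_pmd_block_bbm(pmd_t *pmdp, unsigned long addr,
				phys_addr_t pa, pgprot_t prot,
				pte_t *new_table)
{
	int i;

	/* Fill the replacement PTE table while nothing can walk it yet. */
	for (i = 0; i < PTRS_PER_PTE; i++)
		set_pte(new_table + i,
			pfn_pte(__phys_to_pfn(pa + i * PAGE_SIZE), prot));

	/* Break: remove the old 2MB block entry, leaving the range unmapped. */
	pmd_clear(pmdp);

	/* Invalidate any TLB entries cached from the old block mapping. */
	flush_tlb_kernel_range(addr, addr + PMD_SIZE);

	/*
	 * Make: only now install the table entry.  Because the block entry
	 * was invalidated before the page entries became visible, the TLB
	 * can never hold a 2MB and a 4KB entry for the same VA at once,
	 * which is what would otherwise risk a TLB conflict abort.
	 *
	 * The cost is the window between "break" and "make": any access to
	 * this 2MB range faults, so the sequence cannot be applied to memory
	 * that the code performing the split itself depends on (its stack,
	 * text or data), which is why splitting the live linear map is hard
	 * without FEAT_BBM.
	 */
	set_pmd(pmdp, __pmd(__pa(new_table) | PMD_TYPE_TABLE));
}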