From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 09 Mar 2021 08:43:52 +0000
Message-ID: <87y2ewyawn.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: "wangyanan (Y)" <wangyanan55@huawei.com>
Cc: Will Deacon <will@kernel.org>, Catalin Marinas <catalin.marinas@arm.com>,
        James Morse <james.morse@arm.com>,
        Julien Thierry <julien.thierry.kdev@gmail.com>,
        Suzuki K Poulose <suzuki.poulose@arm.com>,
        kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
        kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
        wanghaibin.wang@huawei.com, yuzenghui@huawei.com
Subject: Re: [PATCH 2/2] KVM: arm64: Skip the cache flush when coalescing tables into a block
In-Reply-To: <8a947c73-16e9-7ca7-c185-d4c951938505@huawei.com>
References: <20210125141044.380156-1-wangyanan55@huawei.com>
        <20210125141044.380156-3-wangyanan55@huawei.com>
        <20210308163454.GA26561@willie-the-truck>
        <8a947c73-16e9-7ca7-c185-d4c951938505@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

On Tue, 09 Mar 2021 08:34:43 +0000,
"wangyanan (Y)" <wangyanan55@huawei.com> wrote:
> 
> 
> On 2021/3/9 0:34, Will Deacon wrote:
> > On Mon, Jan 25, 2021 at 10:10:44PM +0800, Yanan Wang wrote:
> >> After dirty-logging is stopped for a VM configured with huge mappings,
> >> KVM will recover the table mappings back to block mappings. As we only
> >> replace the existing page tables with a block entry and the cacheability
> >> has not been changed, the cache maintenance operations can be skipped.
> >>
> >> Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
> >> ---
> >>  arch/arm64/kvm/mmu.c | 12 +++++++++---
> >>  1 file changed, 9 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> >> index 8e8549ea1d70..37b427dcbc4f 100644
> >> --- a/arch/arm64/kvm/mmu.c
> >> +++ b/arch/arm64/kvm/mmu.c
> >> @@ -744,7 +744,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >>  {
> >>  	int ret = 0;
> >>  	bool write_fault, writable, force_pte = false;
> >> -	bool exec_fault;
> >> +	bool exec_fault, adjust_hugepage;
> >>  	bool device = false;
> >>  	unsigned long mmu_seq;
> >>  	struct kvm *kvm = vcpu->kvm;
> >> @@ -872,12 +872,18 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >>  		mark_page_dirty(kvm, gfn);
> >>  	}
> >>
> >> -	if (fault_status != FSC_PERM && !device)
> >> +	/*
> >> +	 * There is no necessity to perform cache maintenance operations if we
> >> +	 * will only replace the existing table mappings with a block mapping.
> >> +	 */
> >> +	adjust_hugepage = fault_granule < vma_pagesize ? true : false;
> >
> > nit: you don't need the '? true : false' part
> >
> > That said, your previous patch checks for 'fault_granule > vma_pagesize',
> > so I'm not sure the local variable helps all that much here because it
> > obscures the size checks in my opinion. It would be more straightforward
> > if we could structure the logic as:
> >
> > 	if (fault_granule < vma_pagesize) {
> >
> > 	} else if (fault_granule > vma_pagesize) {
> >
> > 	} else {
> >
> > 	}
> >
> > With some comments describing what we can infer about the memcache and
> > cache maintenance requirements for each case.
> Thanks for your suggestion here, Will.
> But I have resent another newer series [1] (KVM: arm64: Improve
> efficiency of stage2 page table) recently, which has the same theme
> but different solutions that I think are better.
> [1]
> https://lore.kernel.org/lkml/20210208112250.163568-1-wangyanan55@huawei.com/
>
> Could you please comment on that series? I think it can be found in
> your inbox :).

There were already a bunch of comments on that series, and I stopped
at the point where the cache maintenance was broken. Please respin
that series if you want further feedback on it.

In the future, if you deprecate a series (which is completely
understandable), please leave a note on the list with a pointer to the
new series so that people don't waste time reviewing an obsolete
series. Or post the new series with a new version number so that it is
obvious that the original series has been superseded.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
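[Editor's note: a minimal, self-contained sketch of the three-way structure Will suggests in the quoted review. The helper name `can_skip_cache_maint` and the return values in each branch are illustrative assumptions drawn from the quoted patch (CMOs can be skipped only when coalescing existing tables into a block); this is not the kernel's actual `user_mem_abort()` logic.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical sketch: compare the faulting granule size against the
 * VMA mapping size and decide, per case, whether cache maintenance
 * operations (CMOs) can be skipped. Branch outcomes are assumptions
 * based on the quoted patch, not the upstream implementation.
 */
static bool can_skip_cache_maint(uint64_t fault_granule, uint64_t vma_pagesize)
{
	if (fault_granule < vma_pagesize) {
		/*
		 * Coalescing existing table mappings into a block:
		 * cacheability is unchanged, so CMOs can be skipped.
		 */
		return true;
	} else if (fault_granule > vma_pagesize) {
		/* Splitting a block into tables: maintenance still needed. */
		return false;
	} else {
		/* Mapping at the same granularity: default path. */
		return false;
	}
}
```

Will's nit also applies in such a sketch: a comparison already yields a bool, so `adjust_hugepage = fault_granule < vma_pagesize;` needs no `? true : false` ternary.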