Date: Tue, 15 Mar 2022 16:50:50 +0800
From: Peter Xu
To: David Matlack
Cc: Paolo Bonzini, Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson, Andrew Jones,
 Ben Gardon, maciej.szmigiero@oracle.com,
 "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
 "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)",
 "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)",
 Peter Feiner
Subject: Re: [PATCH v2 04/26] KVM: x86/mmu: Decompose kvm_mmu_get_page() into separate functions
References: <20220311002528.2230172-1-dmatlack@google.com>
 <20220311002528.2230172-5-dmatlack@google.com>
In-Reply-To: <20220311002528.2230172-5-dmatlack@google.com>

On Fri, Mar 11, 2022 at 12:25:06AM +0000, David Matlack wrote:
> Decompose kvm_mmu_get_page() into separate helper functions to increase
> readability and prepare for allocating shadow pages without a vcpu
> pointer.
>
> Specifically, pull the guts of kvm_mmu_get_page() into 3 helper
> functions:
>
> __kvm_mmu_find_shadow_page() -
>   Walks the page hash checking for any existing mmu pages that match the
>   given gfn and role. Does not attempt to synchronize the page if it is
>   unsync.
>
> kvm_mmu_find_shadow_page() -
>   Wraps __kvm_mmu_find_shadow_page() and handles syncing if necessary.
>
> kvm_mmu_new_shadow_page()
>   Allocates and initializes an entirely new kvm_mmu_page. This currently
>   requries a vcpu pointer for allocation and looking up the memslot but
>   that will be removed in a future commit.
>
> Note, kvm_mmu_new_shadow_page() is temporary and will be removed in a
> subsequent commit. The name uses "new" rather than the more typical
> "alloc" to avoid clashing with the existing kvm_mmu_alloc_page().
>
> No functional change intended.
>
> Signed-off-by: David Matlack

Looks good to me, a few nitpicks and questions below.

> ---
>  arch/x86/kvm/mmu/mmu.c         | 132 ++++++++++++++++++++++++---------
>  arch/x86/kvm/mmu/paging_tmpl.h |   5 +-
>  arch/x86/kvm/mmu/spte.c        |   5 +-
>  3 files changed, 101 insertions(+), 41 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 23c2004c6435..80dbfe07c87b 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2027,16 +2027,25 @@ static void clear_sp_write_flooding_count(u64 *spte)
>  	__clear_sp_write_flooding_count(sptep_to_sp(spte));
>  }
>
> -static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
> -					     union kvm_mmu_page_role role)
> +/*
> + * Searches for an existing SP for the given gfn and role. Makes no attempt to
> + * sync the SP if it is marked unsync.
> + *
> + * If creating an upper-level page table, zaps unsynced pages for the same
> + * gfn and adds them to the invalid_list. It's the callers responsibility
> + * to call kvm_mmu_commit_zap_page() on invalid_list.
> + */
> +static struct kvm_mmu_page *__kvm_mmu_find_shadow_page(struct kvm *kvm,
> +							gfn_t gfn,
> +							union kvm_mmu_page_role role,
> +							struct list_head *invalid_list)
>  {
>  	struct hlist_head *sp_list;
>  	struct kvm_mmu_page *sp;
>  	int collisions = 0;
> -	LIST_HEAD(invalid_list);
>
> -	sp_list = &vcpu->kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
> -	for_each_valid_sp(vcpu->kvm, sp, sp_list) {
> +	sp_list = &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
> +	for_each_valid_sp(kvm, sp, sp_list) {
>  		if (sp->gfn != gfn) {
>  			collisions++;
>  			continue;
> @@ -2053,60 +2062,109 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
>  			 * upper-level page will be write-protected.
>  			 */
>  			if (role.level > PG_LEVEL_4K && sp->unsync)
> -				kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
> -							 &invalid_list);
> +				kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);
> +
>  			continue;
>  		}
>
> -		/* unsync and write-flooding only apply to indirect SPs. */
> -		if (sp->role.direct)
> -			goto trace_get_page;
> +		/* Write-flooding is only tracked for indirect SPs. */
> +		if (!sp->role.direct)
> +			__clear_sp_write_flooding_count(sp);
>
> -		if (sp->unsync) {
> -			/*
> -			 * The page is good, but is stale.  kvm_sync_page does
> -			 * get the latest guest state, but (unlike mmu_unsync_children)
> -			 * it doesn't write-protect the page or mark it synchronized!
> -			 * This way the validity of the mapping is ensured, but the
> -			 * overhead of write protection is not incurred until the
> -			 * guest invalidates the TLB mapping.  This allows multiple
> -			 * SPs for a single gfn to be unsync.
> -			 *
> -			 * If the sync fails, the page is zapped.  If so, break
> -			 * in order to rebuild it.
> -			 */
> -			if (!kvm_sync_page(vcpu, sp, &invalid_list))
> -				break;
> +		goto out;
> +	}
>
> -			WARN_ON(!list_empty(&invalid_list));
> -			kvm_flush_remote_tlbs(vcpu->kvm);
> -		}
> +	sp = NULL;
>
> -		__clear_sp_write_flooding_count(sp);
> +
> +out:
> +	if (collisions > kvm->stat.max_mmu_page_hash_collisions)
> +		kvm->stat.max_mmu_page_hash_collisions = collisions;
> +
> +	return sp;
> +}
>
> -trace_get_page:
> -	trace_kvm_mmu_get_page(sp, false);
> +/*
> + * Looks up an existing SP for the given gfn and role if one exists. The
> + * return SP is guaranteed to be synced.
> + */
> +static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
> +						     gfn_t gfn,
> +						     union kvm_mmu_page_role role)
> +{
> +	struct kvm_mmu_page *sp;
> +	LIST_HEAD(invalid_list);
> +
> +	sp = __kvm_mmu_find_shadow_page(vcpu->kvm, gfn, role, &invalid_list);
> +	if (!sp)
>  		goto out;
> +
> +	if (sp->unsync) {
> +		/*
> +		 * The page is good, but is stale.  kvm_sync_page does
> +		 * get the latest guest state, but (unlike mmu_unsync_children)
> +		 * it doesn't write-protect the page or mark it synchronized!
> +		 * This way the validity of the mapping is ensured, but the
> +		 * overhead of write protection is not incurred until the
> +		 * guest invalidates the TLB mapping.  This allows multiple
> +		 * SPs for a single gfn to be unsync.
> +		 *
> +		 * If the sync fails, the page is zapped and added to the
> +		 * invalid_list.
> +		 */
> +		if (!kvm_sync_page(vcpu, sp, &invalid_list)) {
> +			sp = NULL;
> +			goto out;
> +		}
> +
> +		WARN_ON(!list_empty(&invalid_list));

Not directly related to this patch, since I think this is a pure code
movement, but I have a question on why invalid_list is guaranteed to be
empty here.  I'm thinking of the case where, while looking up the page, we
could already have called kvm_mmu_prepare_zap_page().  When we reach here
(the kvm_sync_page()==true case), invalid_list shouldn't have been touched
by kvm_sync_page(), so it looks possible that it still contains pages
waiting to be committed?

> +		kvm_flush_remote_tlbs(vcpu->kvm);
>  	}
>
> +out:

I'm wondering whether this "out" label can be dropped, with something like:

	sp = __kvm_mmu_find_shadow_page(...);

	if (sp && sp->unsync) {
		if (kvm_sync_page(vcpu, sp, &invalid_list)) {
			..
		} else {
			sp = NULL;
		}
	}

[...]
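To spell that out, here is a rough, untested sketch of how the whole helper
might read without the label.  It assumes the part trimmed above ("[...]")
simply commits invalid_list and returns sp, and it keeps the remote TLB flush
on the successful-sync path; treat it as an illustration of the suggestion,
not the actual patch code:

	static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm_vcpu *vcpu,
							      gfn_t gfn,
							      union kvm_mmu_page_role role)
	{
		struct kvm_mmu_page *sp;
		LIST_HEAD(invalid_list);

		sp = __kvm_mmu_find_shadow_page(vcpu->kvm, gfn, role, &invalid_list);

		if (sp && sp->unsync) {
			/* Sync the stale page before returning it; drop it on failure. */
			if (kvm_sync_page(vcpu, sp, &invalid_list))
				kvm_flush_remote_tlbs(vcpu->kvm);
			else
				sp = NULL;
		}

		/* Assumed tail: commit anything zapped during the lookup/sync. */
		kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
		return sp;
	}
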
> +static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu, gfn_t gfn,
> +					     union kvm_mmu_page_role role)
> +{
> +	struct kvm_mmu_page *sp;
> +	bool created = false;
> +
> +	sp = kvm_mmu_find_shadow_page(vcpu, gfn, role);
> +	if (sp)
> +		goto out;
> +
> +	created = true;
> +	sp = kvm_mmu_new_shadow_page(vcpu, gfn, role);
> +
> +out:
> +	trace_kvm_mmu_get_page(sp, created);
>  	return sp;

Same here, I'm wondering whether we could drop the "out" label with:

	sp = kvm_mmu_find_shadow_page(vcpu, gfn, role);
	if (!sp) {
		created = true;
		sp = kvm_mmu_new_shadow_page(vcpu, gfn, role);
	}

	trace_kvm_mmu_get_page(sp, created);
	return sp;

Thanks,

-- 
Peter Xu