From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 10 Feb 2022 23:14:19 +0000
From: Sean Christopherson
To: Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, vkuznets@redhat.com,
	mlevitsk@redhat.com, dmatlack@google.com
Subject: Re: [PATCH 03/12] KVM: x86: do not deliver asynchronous page faults if CR0.PG=0
References: <20220209170020.1775368-1-pbonzini@redhat.com>
 <20220209170020.1775368-4-pbonzini@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Feb 10, 2022, Sean Christopherson wrote:
> On Wed, Feb 09, 2022, Paolo Bonzini wrote:
> > Enabling async page faults is nonsensical if paging is disabled, but
> > it is allowed because CR0.PG=0 does not clear the async page fault
> > MSR.  Just ignore them and only use the artificial halt state,
> > similar to what happens in guest mode if async #PF vmexits are disabled.
> >
> > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> > ---
> >  arch/x86/kvm/x86.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 5e1298aef9e2..98aca0f2af12 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -12272,7 +12272,9 @@ static inline bool apf_pageready_slot_free(struct kvm_vcpu *vcpu)
> >
> >  static bool kvm_can_deliver_async_pf(struct kvm_vcpu *vcpu)
> >  {
> > -	if (!vcpu->arch.apf.delivery_as_pf_vmexit && is_guest_mode(vcpu))
> > +	if (is_guest_mode(vcpu)
> > +	    ? !vcpu->arch.apf.delivery_as_pf_vmexit
> > +	    : !is_cr0_pg(vcpu->arch.mmu))
>
> As suggested in the previous patch, is_paging(vcpu).
>
> I find a more traditional if-else marginally easier to understand, given the
> implication that CR0.PG is L2's CR0 and thus irrelevant if
> is_guest_mode()==true.  Not a big deal though.
>
> 	if (is_guest_mode(vcpu)) {
> 		if (!vcpu->arch.apf.delivery_as_pf_vmexit)
> 			return false;
> 	} else if (!is_paging(vcpu)) {
> 		return false;
> 	}

Alternatively, what about reordering and refactoring to yield:

	if (!kvm_pv_async_pf_enabled(vcpu))
		return false;

	if (vcpu->arch.apf.send_user_only &&
	    static_call(kvm_x86_get_cpl)(vcpu) == 0)
		return false;

	/* L1 CR0.PG=1 is guaranteed if the vCPU is in guest mode (L2). */
	if (is_guest_mode(vcpu))
		return vcpu->arch.apf.delivery_as_pf_vmexit;

	return is_cr0_pg(vcpu->arch.mmu);

There isn't any need to "batch" the if statements.