Date: Mon, 1 Aug 2022 23:27:52 +0000
From: Sean Christopherson
To: David Matlack
Cc: isaku.yamahata@intel.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sagi Shahar
Subject: Re: [RFC PATCH v6 036/104] KVM: x86/mmu: Explicitly check for MMIO spte in fast page fault

On Mon, Aug 01, 2022, David Matlack wrote:
> On Thu, May 05, 2022 at 11:14:30AM -0700, isaku.yamahata@intel.com wrote:
> > From: Sean Christopherson
> > 
> > Explicitly check for an MMIO spte in the fast page fault flow. TDX will
> > use a not-present entry for MMIO sptes, which can be mistaken for an
> > access-tracked spte since both have SPTE_SPECIAL_MASK set.
> > 
> > MMIO sptes are handled in handle_mmio_page_fault for non-TDX VMs, so this
> > patch does not affect them. TDX will handle MMIO emulation through a
> > hypercall instead.
> > 
> > Signed-off-by: Sean Christopherson
> > Signed-off-by: Isaku Yamahata
> > ---
> >  arch/x86/kvm/mmu/mmu.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index d1c37295bb6e..4a12d862bbb6 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -3184,7 +3184,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> >  	else
> >  		sptep = fast_pf_get_last_sptep(vcpu, fault->addr, &spte);
> >  
> > -	if (!is_shadow_present_pte(spte))
> > +	if (!is_shadow_present_pte(spte) || is_mmio_spte(spte))
> 
> I wonder if this patch is really necessary. is_shadow_present_pte()
> checks if SPTE_MMU_PRESENT_MASK is set (which is bit 11, not
> shadow_present_mask). Do TDX VMs set bit 11 in MMIO SPTEs?

This patch should be unnecessary; TDX's not-present SPTEs were one of my
motivations for adding MMU_PRESENT. Bit 11 most definitely must not be set
for MMIO SPTEs.