Date: Mon, 1 Aug 2022 15:27:11 -0700
From: David Matlack
To: isaku.yamahata@intel.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar
Subject: Re: [RFC PATCH v6 036/104] KVM: x86/mmu: Explicitly check for MMIO spte in fast page fault

On Thu, May 05, 2022 at 11:14:30AM -0700, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson
>
> Explicitly check for an MMIO spte in the fast page fault flow. TDX will
> use a not-present entry for MMIO sptes, which can be mistaken for an
> access-tracked spte since both have SPTE_SPECIAL_MASK set.
>
> MMIO sptes are handled in handle_mmio_page_fault for non-TDX VMs, so this
> patch does not affect them. TDX will handle MMIO emulation through a
> hypercall instead.
>
> Signed-off-by: Sean Christopherson
> Signed-off-by: Isaku Yamahata
> ---
>  arch/x86/kvm/mmu/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index d1c37295bb6e..4a12d862bbb6 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3184,7 +3184,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  	else
>  		sptep = fast_pf_get_last_sptep(vcpu, fault->addr, &spte);
>
> -	if (!is_shadow_present_pte(spte))
> +	if (!is_shadow_present_pte(spte) || is_mmio_spte(spte))

I wonder if this patch is really necessary. is_shadow_present_pte()
checks whether SPTE_MMU_PRESENT_MASK is set (which is bit 11, not
shadow_present_mask). Do TDX VMs set bit 11 in MMIO SPTEs?

>  		break;
>
>  	sp = sptep_to_sp(sptep);
> --
> 2.25.1
>