From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 7 Apr 2022 18:50:29 +0000
From: Sean Christopherson
To: Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, Paolo Bonzini, Wanpeng Li, Jim Mattson,
    Michael Kelley, Siddharth Chandrasekaran, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 18/31] KVM: nSVM: hyper-v: Direct TLB flush
References: <20220407155645.940890-1-vkuznets@redhat.com>
 <20220407155645.940890-19-vkuznets@redhat.com>
In-Reply-To: <20220407155645.940890-19-vkuznets@redhat.com>

On Thu, Apr 07, 2022, Vitaly Kuznetsov wrote:
> @@ -486,6 +487,17 @@ static void nested_save_pending_event_to_vmcb12(struct vcpu_svm *svm,
>  
>  static void nested_svm_transition_tlb_flush(struct kvm_vcpu *vcpu)
>  {
> +	/*
> +	 * KVM_REQ_HV_TLB_FLUSH flushes entries from either L1's VPID or

Can you use VP_ID or some variation to avoid "VPID"?  This looks like a
copy+paste from nVMX gone bad and will confuse the heck out of people that are
more familiar with VMX's VPID.
> +	 * L2's VPID upon request from the guest. Make sure we check for
> +	 * pending entries for the case when the request got misplaced (e.g.
> +	 * a transition from L2->L1 happened while processing Direct TLB flush
> +	 * request or vice versa). kvm_hv_vcpu_flush_tlb() will not flush
> +	 * anything if there are no requests in the corresponding buffer.
> +	 */
> +	if (to_hv_vcpu(vcpu))
> +		kvm_make_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
> +
> 	/*
> 	 * TODO: optimize unconditional TLB flush/MMU sync. A partial list of
> 	 * things to fix before this can be conditional:
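For readers following along, the pattern in the quoted comment can be illustrated with a self-contained sketch. These are hypothetical stand-in types, not KVM's real structures (the real code uses struct kvm_vcpu, per-vCPU Hyper-V state, and request bitmaps): each of L1 and L2 keeps its own flush-entry buffer, a nested transition unconditionally re-raises the flush request so a request queued against the other level is not lost, and the flush handler is a cheap no-op when the buffer for the now-current context is empty.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for a per-context TLB-flush-entry buffer. */
struct flush_ctx {
	size_t pending;      /* number of queued flush entries */
	size_t flushes_done; /* how many real flushes were performed */
};

/* Hypothetical stand-in for a vCPU with Hyper-V state enabled. */
struct vcpu {
	bool flush_requested; /* models KVM_REQ_HV_TLB_FLUSH being set */
	bool in_l2;           /* currently running the L2 guest? */
	struct flush_ctx l1, l2;
};

/*
 * Models the idea behind nested_svm_transition_tlb_flush(): on any
 * L1<->L2 switch, re-raise the flush request in case one got queued
 * while the other level was active. Cheap, because the handler below
 * checks for pending entries before doing any work.
 */
static void nested_transition(struct vcpu *v)
{
	v->in_l2 = !v->in_l2;
	v->flush_requested = true;
}

/*
 * Models the idea behind kvm_hv_vcpu_flush_tlb(): consume the request,
 * but only flush if the buffer for the current context has entries.
 */
static void handle_flush_request(struct vcpu *v)
{
	struct flush_ctx *ctx = v->in_l2 ? &v->l2 : &v->l1;

	if (!v->flush_requested)
		return;
	v->flush_requested = false;

	if (ctx->pending == 0)
		return; /* request was for the other level; nothing to do */

	ctx->pending = 0;
	ctx->flushes_done++;
}
```

The key property this sketch shows is that unconditionally re-raising the request on a transition is safe: a spurious request costs only an empty-buffer check, while a dropped request would leave stale TLB entries behind.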