From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 21 Aug 2019 08:18:46 -0700
From: Sean Christopherson
To: Mihai Donțu
Cc: Nicusor CITU, Adalbert Lazăr, "kvm@vger.kernel.org",
	"linux-mm@kvack.org", "virtualization@lists.linux-foundation.org",
	Paolo Bonzini, Radim Krčmář, Konrad Rzeszutek Wilk,
	Tamas K Lengyel, Mathieu Tarral, Samuel Laurén, Patrick Colp,
	Jan Kiszka, Stefan Hajnoczi, Weijiang Yang,
	"Zhang@vger.kernel.org", Yu C
X-Mailing-List: kvm@vger.kernel.org
Subject: Re: [RFC PATCH v6 55/92] kvm: introspection: add
 KVMI_CONTROL_MSR and KVMI_EVENT_MSR
Message-ID: <20190821151846.GD29345@linux.intel.com>
References: <20190809160047.8319-1-alazar@bitdefender.com>
 <20190809160047.8319-56-alazar@bitdefender.com>
 <20190812210501.GD1437@linux.intel.com>
 <20190819183643.GB1916@linux.intel.com>
 <6854bfcc2bff3ffdaadad8708bd186a071ad682c.camel@bitdefender.com>
 <72df8b3ea66bb5bc7bb9c17e8bf12e12320358e1.camel@bitdefender.com>
In-Reply-To: <72df8b3ea66bb5bc7bb9c17e8bf12e12320358e1.camel@bitdefender.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
User-Agent: Mutt/1.5.24 (2015-08-30)

On Tue, Aug 20, 2019 at 02:43:32PM +0300, Mihai Donțu wrote:
> On Tue, 2019-08-20 at 08:44 +0000, Nicusor CITU wrote:
> > > > > > +static void vmx_msr_intercept(struct kvm_vcpu *vcpu, unsigned int msr,
> > > > > > +			      bool enable)
> > > > > > +{
> > > > > > +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> > > > > > +	unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
> > >
> > > Is KVMI intended to play nice with nested virtualization?
> > > Unconditionally updating vmcs01.msr_bitmap is correct regardless of
> > > whether the vCPU is in L1 or L2, but if the vCPU is currently in L2
> > > then the effective bitmap, i.e. vmcs02.msr_bitmap, won't be updated
> > > until the next nested VM-Enter.
> >
> > Our initial proof of concept was running with success in nested
> > virtualization, but most of our tests were done on bare metal.
> > We do however intend to make it fully functional on nested systems
> > too.
> >
> > Even so, from the KVMI point of view, the MSR interception
> > configuration would be fine as long as it gets updated before the
> > vCPU actually enters the nested VM.
>
> I believe Sean is referring here to the case where the guest being
> introspected is a hypervisor (eg. Windows 10 with device guard).

Yep.
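To make the staleness concrete, here's a stand-alone toy model (all names are made up for illustration, this is not actual KVM code): an intercept update that touches only the L1 (vmcs01) bitmap is invisible while the vCPU runs on the L2 (vmcs02) bitmap, until vmcs02 is rebuilt at the next nested VM-Enter.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define MSR_BITMAP_WORDS 16

/* Simplified model: one intercept bitmap per VMCS, one bit per MSR. */
struct vmcs_model {
	unsigned long msr_bitmap[MSR_BITMAP_WORDS];
};

struct vcpu_model {
	struct vmcs_model vmcs01;	/* L1 controls */
	struct vmcs_model vmcs02;	/* merged controls used while in L2 */
	bool in_l2;
};

static void set_intercept(unsigned long *bitmap, unsigned int msr, bool enable)
{
	unsigned int word = msr / (8 * sizeof(unsigned long));
	unsigned long mask = 1UL << (msr % (8 * sizeof(unsigned long)));

	if (enable)
		bitmap[word] |= mask;
	else
		bitmap[word] &= ~mask;
}

static bool is_intercepted(const struct vcpu_model *vcpu, unsigned int msr)
{
	/* Hardware consults whichever bitmap is currently active. */
	const unsigned long *bitmap = vcpu->in_l2 ? vcpu->vmcs02.msr_bitmap
						  : vcpu->vmcs01.msr_bitmap;
	unsigned long mask = 1UL << (msr % (8 * sizeof(unsigned long)));

	return bitmap[msr / (8 * sizeof(unsigned long))] & mask;
}

/* The pattern under discussion: only the vmcs01 bitmap is touched. */
static void msr_intercept_vmcs01_only(struct vcpu_model *vcpu,
				      unsigned int msr, bool enable)
{
	set_intercept(vcpu->vmcs01.msr_bitmap, msr, enable);
}

/* vmcs02 picks up vmcs01's bits only when rebuilt at nested VM-Enter. */
static void nested_vm_enter(struct vcpu_model *vcpu)
{
	memcpy(&vcpu->vmcs02, &vcpu->vmcs01, sizeof(vcpu->vmcs02));
	vcpu->in_l2 = true;
}
```

In this model, flipping an intercept while `in_l2` is set takes effect only after the next `nested_vm_enter()`, which is exactly the window being pointed out above.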
> Even though we are looking at how to approach this scenario, the
> introspection tools we have built will refuse to attach to a
> hypervisor.

In that case, it's probably a good idea to make KVMI mutually exclusive
with nested virtualization.  Doing so should, in theory, simplify the
implementation and expedite upstreaming, e.g. reviewers don't have to
nitpick edge cases related to nested virt.  My only hesitation in
disabling KVMI when nested virt is enabled is that it could make it much
more difficult to (re)enable the combination in the future.
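A sketch of what that mutual exclusion could look like (entirely hypothetical names, not KVM's actual capability plumbing): whichever feature is enabled first wins, and enabling the other is refused with a distinct error code.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Hypothetical per-VM state; neither field exists in KVM as written. */
struct vm_model {
	bool nested_enabled;
	bool introspected;
};

static int vm_enable_nested(struct vm_model *vm)
{
	if (vm->introspected)
		return -EBUSY;		/* introspection already attached */
	vm->nested_enabled = true;
	return 0;
}

static int vm_attach_introspection(struct vm_model *vm)
{
	if (vm->nested_enabled)
		return -EOPNOTSUPP;	/* mutually exclusive with nested virt */
	vm->introspected = true;
	return 0;
}
```

Returning distinct errors at least lets userspace tell "try again later" apart from "this combination is unsupported", which keeps the door open for relaxing the restriction later.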