From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 25 Aug 2022 15:44:26 +0000
From: Sean Christopherson
To: Maxim Levitsky
Cc: kvm@vger.kernel.org, Borislav Petkov, Dave Hansen, linux-kernel@vger.kernel.org, Wanpeng Li, Ingo Molnar, x86@kernel.org, Jim Mattson, Kees Cook, Thomas Gleixner, "H. Peter Anvin", Joerg Roedel, Vitaly Kuznetsov, Paolo Bonzini
Subject: Re: [PATCH v3 13/13] KVM: x86: emulator/smm: preserve interrupt shadow in SMRAM
Message-ID:
References: <20220803155011.43721-1-mlevitsk@redhat.com> <20220803155011.43721-14-mlevitsk@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Aug 25, 2022, Maxim Levitsky wrote:
> On Wed, 2022-08-24 at 23:50 +0000, Sean Christopherson wrote:
> > On Wed, Aug 03, 2022, Maxim Levitsky wrote:
> > > @@ -518,7 +519,8 @@ struct kvm_smram_state_32 {
> > >  	u32 reserved1[62];
> > >  	u32 smbase;
> > >  	u32 smm_revision;
> > > -	u32 reserved2[5];
> > > +	u32 reserved2[4];
> > > +	u32 int_shadow; /* KVM extension */
> > 
> > Looking at this with fresh(er) eyes, I agree with Jim: KVM shouldn't add its own
> > fields in SMRAM.  There's no need to use vmcb/vmcs memory either, just add fields
> > in kvm_vcpu_arch to save/restore the state across SMI/RSM, and then borrow VMX's
> > approach of supporting migration by adding flags to do out-of-band migration,
> > e.g. KVM_STATE_NESTED_SMM_STI_BLOCKING and KVM_STATE_NESTED_SMM_MOV_SS_BLOCKING.
> > 
> > 	/* SMM state that's not saved in SMRAM. */
> > 	struct {
> > 		struct {
> > 			u8 interruptibility;
> > 		} smm;
> > 	} nested;
> > 
> > That'd finally give us an excuse to move nested_run_pending to common code too :-)
> 
> Paolo told me that he wants it to be done this way (save the state in the
> smram).

Paolo, what's the motivation for using SMRAM?  I don't see any obvious advantage
for KVM.  QEMU apparently would need to migrate interrupt.shadow, but QEMU should
be doing that anyways, no?

> My first version of this patch was actually saving the state in kvm internal
> state; I personally don't mind much whether to do it this way or another.
> 
> But note that I can't use nested state - the int shadow thing has nothing to
> do with nesting.

Oh, duh.
> I think that 'struct kvm_vcpu_events' is the right place for this, and in fact
> it already has interrupt.shadow (which btw Qemu doesn't migrate...)
> 
> My approach was to use the upper 4 bits of 'interrupt.shadow' since it is
> highly unlikely that we will ever see more than 16 different interrupt shadows.

Heh, unless we ensure STI+MOVSS are mutually exclusive... s/16/4, because
KVM_X86_SHADOW_INT_* are currently treated as masks, not values.

Pedantry aside, using interrupt.shadow definitely seems like the way to go.  We
wouldn't even technically need to use the upper four bits since the bits are KVM
controlled and not hardware-defined, though I agree that using bits 5 and 6 would
give us more flexibility if we ever need to convert the masks to values.

> It would be a bit more clean to put it into the 'smi' substruct, but we already
> have the 'triple_fault' afterwards
> 
> (but I think that this was a very recent addition - maybe it is not too late?)
> 
> A new 'KVM_VCPUEVENT_VALID_SMM_SHADOW' flag can be added to the struct to
> indicate the extra bits if you want.
> 
> Best regards,
> 	Maxim Levitsky