From: Paolo Bonzini <pbonzini@redhat.com>
To: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, bsd@redhat.com
Subject: Re: [PATCH 08/12] KVM: x86: save/load state on SMM switch
Date: Thu, 21 May 2015 23:21:18 +0200
Message-ID: <555E4C4E.1010603@redhat.com>
In-Reply-To: <20150521170014.GB31171@potion.brq.redhat.com>
On 21/05/2015 19:00, Radim Krčmář wrote:
> Potentially, an NMI could be latched (while in SMM or upon exit) and
> serviced upon exit [...]
>
> This "Potentially" could be in the sense that the whole 3rd paragraph is
> only applicable to some ancient SMM design :)
It could also be in the sense that you cannot exclude an NMI coming at
exactly the wrong time.
If you want to go full language lawyer, the manual does call it out
whenever behavior is specific to a processor family.
> The 1st paragraph has quite clear sentence:
>
> If NMIs were blocked before the SMI occurred, they are blocked after
> execution of RSM.
>
> so I'd just ignore the 3rd paragraph ...
>
> And the APM 2:10.3.3 Exceptions and Interrupts
> NMI—If an NMI occurs while the processor is in SMM, it is latched by
> the processor, but the NMI handler is not invoked until the processor
> leaves SMM with the execution of an RSM instruction. A pending NMI
> causes the handler to be invoked immediately after the RSM completes
> and before the first instruction in the interrupted program is
> executed.
>
> An SMM handler can unmask NMI interrupts by simply executing an IRET.
> Upon completion of the IRET instruction, the processor recognizes the
> pending NMI, and transfers control to the NMI handler. Once an NMI is
> recognized within SMM using this technique, subsequent NMIs are
> recognized until SMM is exited. Later SMIs cause NMIs to be masked,
> until the SMM handler unmasks them.
>
> makes me think that we should unmask them unconditionally or that SMM
> doesn't do anything with NMI masking.
Actually, I hadn't noticed this paragraph. But I read it the same way as
the Intel manual (i.e. what I implemented): it doesn't say anywhere that
RSM may cause the processor to *set* the "NMIs masked" flag.
It makes no sense; as you said, it's just 1 bit of state! But it seems
that this is the architectural behavior. :(
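
To make that reading concrete, here is a minimal standalone sketch of the
rule as I understand it (a toy model only; the names are made up and this
is not the code from this series):

#include <assert.h>
#include <stdbool.h>

/* Toy model of the one bit of state being discussed. */
struct toy_vcpu {
	bool nmi_masked;          /* the "NMIs blocked" flag                */
	bool nmi_masked_pre_smi;  /* value of the flag when the SMI arrived */
};

/* SMI delivery: NMIs are blocked while in SMM; remember the old value. */
static void toy_smi(struct toy_vcpu *v)
{
	v->nmi_masked_pre_smi = v->nmi_masked;
	v->nmi_masked = true;
}

/* An IRET inside the SMM handler unmasks NMIs (APM 2:10.3.3). */
static void toy_iret_in_smm(struct toy_vcpu *v)
{
	v->nmi_masked = false;
}

/*
 * RSM, per the reading above: NMIs are blocked afterwards only if they
 * were blocked before the SMI occurred; RSM never newly sets the flag.
 */
static void toy_rsm(struct toy_vcpu *v)
{
	v->nmi_masked = v->nmi_masked_pre_smi;
}

int main(void)
{
	struct toy_vcpu v = { .nmi_masked = false };

	toy_smi(&v);          /* NMIs blocked while handling the SMI */
	toy_iret_in_smm(&v);  /* handler unmasks NMIs with an IRET   */
	toy_rsm(&v);
	assert(!v.nmi_masked); /* unmasked before SMI => unmasked after RSM */
	return 0;
}
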
> If we can choose, less NMI nesting seems like a good idea.
It would---I'm just preempting future patches from Nadav. :) That said,
even if OVMF does do IRETs in SMM (in 64-bit mode it fills in page
tables lazily for memory above 4GB), we do not care about asynchronous
SMIs such as those for power management. So we should never enter SMM
with NMIs masked, to begin with.
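
Extending the toy model above, that last point would boil down to an
assertion at (synchronous) SMI delivery; again purely illustrative, not
the series' code:

/* Reuses struct toy_vcpu and toy_smi() from the sketch above. */
static void toy_synchronous_smi(struct toy_vcpu *v)
{
	/*
	 * Without asynchronous (e.g. power-management) SMIs, an SMI is
	 * only injected while the guest is running normally, so NMIs
	 * should not already be blocked at this point.
	 */
	assert(!v->nmi_masked);
	toy_smi(v);
}
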
Paolo
Thread overview: 30+ messages
2015-05-08 11:20 [PATCH 00/12] KVM: x86: SMM support Paolo Bonzini
2015-05-08 11:20 ` [PATCH 01/12] KVM: export __gfn_to_pfn_memslot, drop gfn_to_pfn_async Paolo Bonzini
2015-05-08 11:20 ` [PATCH 02/12] KVM: x86: introduce num_emulated_msrs Paolo Bonzini
2015-05-08 11:20 ` [PATCH 03/12] KVM: remove unnecessary arg from mark_page_dirty_in_slot, export it Paolo Bonzini
2015-05-08 11:20 ` [PATCH 04/12] KVM: x86: pass host_initiated to functions that read MSRs Paolo Bonzini
2015-05-08 11:20 ` [PATCH 05/12] KVM: x86: pass the whole hflags field to emulator and back Paolo Bonzini
2015-05-08 11:20 ` [PATCH 06/12] KVM: x86: API changes for SMM support Paolo Bonzini
2015-05-21 14:49 ` Radim Krčmář
2015-05-21 14:59 ` Paolo Bonzini
2015-05-21 16:26 ` Radim Krčmář
2015-05-21 21:21 ` Paolo Bonzini
2015-05-08 11:20 ` [PATCH 07/12] KVM: x86: stubs " Paolo Bonzini
2015-05-21 14:55 ` Radim Krčmář
2015-05-08 11:20 ` [PATCH 08/12] KVM: x86: save/load state on SMM switch Paolo Bonzini
2015-05-21 16:20 ` Radim Krčmář
2015-05-21 16:21 ` Paolo Bonzini
2015-05-21 16:33 ` Radim Krčmář
2015-05-21 20:24 ` Paolo Bonzini
2015-05-22 13:13 ` Radim Krčmář
2015-05-21 16:23 ` Paolo Bonzini
2015-05-21 17:00 ` Radim Krčmář
2015-05-21 21:21 ` Paolo Bonzini [this message]
2015-05-22 14:17 ` Radim Krčmář
2015-05-25 12:46 ` Paolo Bonzini
2015-05-08 11:20 ` [PATCH 09/12] KVM: x86: add vcpu-specific functions to read/write/translate GFNs Paolo Bonzini
2015-05-08 11:20 ` [PATCH 10/12] KVM: x86: add SMM to the MMU role Paolo Bonzini
2015-05-08 11:20 ` [PATCH 11/12] KVM: x86: add KVM_MEM_X86_SMRAM memory slot flag Paolo Bonzini
2015-05-26 18:45 ` Avi Kivity
2015-05-27 9:26 ` Paolo Bonzini
2015-05-08 11:20 ` [PATCH 12/12] KVM: x86: advertise KVM_CAP_X86_SMM Paolo Bonzini