* [PATCH] KVM: x86: fix RSM into 64-bit protected mode, round 2
From: Laszlo Ersek @ 2015-10-23 21:43 UTC
To: kvm, lersek
Cc: Paolo Bonzini, Radim Krčmář, Jordan Justen, Michael Kinney, stable
Commit b10d92a54dac ("KVM: x86: fix RSM into 64-bit protected mode")
reordered the rsm_load_seg_64() and rsm_enter_protected_mode() calls,
relative to each other. The argument that said commit made was correct;
however, putting rsm_enter_protected_mode() first wholesale violated the
following (correct) invariant from em_rsm():
* Get back to real mode, to prepare a safe state in which to load
* CR0/CR3/CR4/EFER. Also this will ensure that addresses passed
* to read_std/write_std are not virtual.
Namely, rsm_enter_protected_mode() may re-enable paging, *after* which
rsm_load_seg_64()
GET_SMSTATE()
read_std()
will try to interpret the (smbase + offset) address as a virtual one. This
will result in unexpected page faults being injected into the guest in
response to the RSM instruction.
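For context, GET_SMSTATE() is a thin wrapper around read_std(); shown
here in approximate form (see arch/x86/kvm/emulate.c for the exact
definition):

  #define GET_SMSTATE(type, smbase, offset)                            \
          ({                                                           \
           type __val;                                                 \
           int r = ctxt->ops->read_std(ctxt, smbase + offset, &__val,  \
                                       sizeof(__val), NULL);           \
           if (r != X86EMUL_CONTINUE)                                  \
                   return X86EMUL_UNHANDLEABLE;                        \
           __val;                                                      \
          })

Once CR0.PG has been set again, read_std() translates its address
argument through the guest page tables, which is exactly what must not
happen for a guest-physical SMRAM address.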
Split rsm_load_seg_64() into two parts:
- The first part, rsm_stash_seg_64(), shall call GET_SMSTATE() while in
real mode, and save the relevant state from SMRAM into an array local to
rsm_load_state_64().
- The second part, rsm_load_seg_64(), shall occur after entering protected
mode, but the segment details shall come from the local array, not the
guest's SMRAM.
Fixes: b10d92a54dac ("KVM: x86: fix RSM into 64-bit protected mode")
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Michael Kinney <michael.d.kinney@intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
---
arch/x86/kvm/emulate.c | 37 ++++++++++++++++++++++++++++++-------
1 file changed, 30 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 9da95b9..25e16b6 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2311,7 +2311,16 @@ static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
 	return X86EMUL_CONTINUE;
 }
 
-static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
+struct rsm_stashed_seg_64 {
+	u16 selector;
+	struct desc_struct desc;
+	u32 base3;
+};
+
+static int rsm_stash_seg_64(struct x86_emulate_ctxt *ctxt,
+			    struct rsm_stashed_seg_64 *stash,
+			    u64 smbase,
+			    int n)
 {
 	struct desc_struct desc;
 	int offset;
@@ -2326,10 +2335,20 @@ static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, u64 smbase, int n)
 	set_desc_base(&desc,      GET_SMSTATE(u32, smbase, offset + 8));
 	base3 =                   GET_SMSTATE(u32, smbase, offset + 12);
 
-	ctxt->ops->set_segment(ctxt, selector, &desc, base3, n);
+	stash[n].selector = selector;
+	stash[n].desc = desc;
+	stash[n].base3 = base3;
 	return X86EMUL_CONTINUE;
 }
 
+static inline void rsm_load_seg_64(struct x86_emulate_ctxt *ctxt,
+				   struct rsm_stashed_seg_64 *stash,
+				   int n)
+{
+	ctxt->ops->set_segment(ctxt, stash[n].selector, &stash[n].desc,
+			       stash[n].base3, n);
+}
+
 static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt,
 				    u64 cr0, u64 cr4)
 {
@@ -2419,6 +2438,7 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
 	u32 base3;
 	u16 selector;
 	int i, r;
+	struct rsm_stashed_seg_64 stash[6];
 
 	for (i = 0; i < 16; i++)
 		*reg_write(ctxt, i) = GET_SMSTATE(u64, smbase, 0x7ff8 - i * 8);
@@ -2460,15 +2480,18 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
 	dt.address =                GET_SMSTATE(u64, smbase, 0x7e68);
 	ctxt->ops->set_gdt(ctxt, &dt);
 
+	for (i = 0; i < ARRAY_SIZE(stash); i++) {
+		r = rsm_stash_seg_64(ctxt, stash, smbase, i);
+		if (r != X86EMUL_CONTINUE)
+			return r;
+	}
+
 	r = rsm_enter_protected_mode(ctxt, cr0, cr4);
 	if (r != X86EMUL_CONTINUE)
 		return r;
 
-	for (i = 0; i < 6; i++) {
-		r = rsm_load_seg_64(ctxt, smbase, i);
-		if (r != X86EMUL_CONTINUE)
-			return r;
-	}
+	for (i = 0; i < ARRAY_SIZE(stash); i++)
+		rsm_load_seg_64(ctxt, stash, i);
 
 	return X86EMUL_CONTINUE;
 }
--
1.8.3.1
* Re: [PATCH] KVM: x86: fix RSM into 64-bit protected mode, round 2
From: Radim Krčmář @ 2015-10-26 15:37 UTC
To: Laszlo Ersek; +Cc: kvm, Paolo Bonzini, Jordan Justen, Michael Kinney, stable
2015-10-23 23:43+0200, Laszlo Ersek:
> Commit b10d92a54dac ("KVM: x86: fix RSM into 64-bit protected mode")
> reordered the rsm_load_seg_64() and rsm_enter_protected_mode() calls,
> relative to each other. The argument that said commit made was correct;
> however, putting rsm_enter_protected_mode() first wholesale violated the
> following (correct) invariant from em_rsm():
>
> * Get back to real mode, to prepare a safe state in which to load
> * CR0/CR3/CR4/EFER. Also this will ensure that addresses passed
> * to read_std/write_std are not virtual.
Nice catch.
> Namely, rsm_enter_protected_mode() may re-enable paging, *after* which
>
> rsm_load_seg_64()
> GET_SMSTATE()
> read_std()
>
> will try to interpret the (smbase + offset) address as a virtual one. This
> will result in unexpected page faults being injected into the guest in
> response to the RSM instruction.
I think this is a good time to introduce the read_phys helper, which we
wanted to avoid with that assumption.
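A minimal sketch of such a helper, assuming a new ->read_phys() emulator
op that reads guest-physical memory directly (the op name and signature
are hypothetical here, nothing like it exists in x86_emulate_ops yet):

  static int read_phys(struct x86_emulate_ctxt *ctxt, u64 smbase,
                       int offset, void *val, unsigned int bytes)
  {
          /*
           * smbase + offset is a guest-physical address, so bypass
           * segmentation and paging instead of going through read_std().
           */
          return ctxt->ops->read_phys(ctxt, smbase + offset, val, bytes);
  }

GET_SMSTATE() could then use it unconditionally, regardless of what
state rsm_enter_protected_mode() has already restored.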
> Split rsm_load_seg_64() into two parts:
>
> - The first part, rsm_stash_seg_64(), shall call GET_SMSTATE() while in
> real mode, and save the relevant state from SMRAM into an array local to
> rsm_load_state_64().
>
> - The second part, rsm_load_seg_64(), shall occur after entering protected
> mode, but the segment details shall come from the local array, not the
> guest's SMRAM.
>
> Fixes: b10d92a54dac ("KVM: x86: fix RSM into 64-bit protected mode")
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Cc: Jordan Justen <jordan.l.justen@intel.com>
> Cc: Michael Kinney <michael.d.kinney@intel.com>
> Cc: stable@vger.kernel.org
> Signed-off-by: Laszlo Ersek <lersek@redhat.com>
> ---
The code would be cleaner if we had a different approach, but this works
too and is safer for stable. In case you prefer to leave the rewrite for
a future victim,
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
* Re: [PATCH] KVM: x86: fix RSM into 64-bit protected mode, round 2
From: Laszlo Ersek @ 2015-10-26 15:43 UTC
To: Radim Krčmář
Cc: kvm, Paolo Bonzini, Jordan Justen, Michael Kinney, stable
On 10/26/15 16:37, Radim Krčmář wrote:
> 2015-10-23 23:43+0200, Laszlo Ersek:
>> Commit b10d92a54dac ("KVM: x86: fix RSM into 64-bit protected mode")
>> reordered the rsm_load_seg_64() and rsm_enter_protected_mode() calls,
>> relative to each other. The argument that said commit made was correct;
>> however, putting rsm_enter_protected_mode() first wholesale violated the
>> following (correct) invariant from em_rsm():
>>
>> * Get back to real mode, to prepare a safe state in which to load
>> * CR0/CR3/CR4/EFER. Also this will ensure that addresses passed
>> * to read_std/write_std are not virtual.
>
> Nice catch.
>
>> Namely, rsm_enter_protected_mode() may re-enable paging, *after* which
>>
>> rsm_load_seg_64()
>> GET_SMSTATE()
>> read_std()
>>
>> will try to interpret the (smbase + offset) address as a virtual one. This
>> will result in unexpected page faults being injected into the guest in
>> response to the RSM instruction.
>
> I think this is a good time to introduce the read_phys helper, which we
> wanted to avoid with that assumption.
>
>> Split rsm_load_seg_64() into two parts:
>>
>> - The first part, rsm_stash_seg_64(), shall call GET_SMSTATE() while in
>> real mode, and save the relevant state from SMRAM into an array local to
>> rsm_load_state_64().
>>
>> - The second part, rsm_load_seg_64(), shall occur after entering protected
>> mode, but the segment details shall come from the local array, not the
>> guest's SMRAM.
>>
>> Fixes: b10d92a54dac ("KVM: x86: fix RSM into 64-bit protected mode")
>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> Cc: Radim Krčmář <rkrcmar@redhat.com>
>> Cc: Jordan Justen <jordan.l.justen@intel.com>
>> Cc: Michael Kinney <michael.d.kinney@intel.com>
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Laszlo Ersek <lersek@redhat.com>
>> ---
>
> The code would be cleaner if we had a different approach, but this works
> too and is safer for stable. In case you prefer to leave the rewrite for
> a future victim,
It's hard to express how much I prefer that.
>
> Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
>
Thank you!
Laszlo
* Re: [PATCH] KVM: x86: fix RSM into 64-bit protected mode, round 2
From: Paolo Bonzini @ 2015-10-26 16:32 UTC
To: Laszlo Ersek, Radim Krčmář
Cc: kvm, Jordan Justen, Michael Kinney, stable
On 26/10/2015 16:43, Laszlo Ersek wrote:
> > The code would be cleaner if we had a different approach, but this works
> > too and is safer for stable. In case you prefer to leave the rewrite for
> > a future victim,
>
> It's hard to express how much I prefer that.
Radim, if you want to have a try, go ahead, since I cannot apply the patch
until next Monday.
Paolo
* Re: [PATCH] KVM: x86: fix RSM into 64-bit protected mode, round 2
From: Radim Krčmář @ 2015-10-30 15:40 UTC
To: Paolo Bonzini; +Cc: Laszlo Ersek, kvm, Jordan Justen, Michael Kinney, stable
2015-10-26 17:32+0100, Paolo Bonzini:
> On 26/10/2015 16:43, Laszlo Ersek wrote:
>>> The code would be cleaner if we had a different approach, but this works
>>> too and is safer for stable. In case you prefer to leave the rewrite for
>>> a future victim,
>>
>> It's hard to express how much I prefer that.
>
> Radim, if you want to have a try, go ahead, since I cannot apply the patch
> until next Monday.
The future I originally had in mind was more hoverboardy, but a series
just landed, "KVM: x86: simplify RSM into 64-bit protected mode".
Laszlo, I'd be grateful if you could check that it works.
* Re: [PATCH] KVM: x86: fix RSM into 64-bit protected mode, round 2
From: Laszlo Ersek @ 2015-10-31 17:09 UTC
To: Radim Krčmář, Paolo Bonzini
Cc: kvm, Jordan Justen, Michael Kinney, stable
On 10/30/15 16:40, Radim Krčmář wrote:
> 2015-10-26 17:32+0100, Paolo Bonzini:
>> On 26/10/2015 16:43, Laszlo Ersek wrote:
>>>> The code would be cleaner if we had a different approach, but this works
>>>> too and is safer for stable. In case you prefer to leave the rewrite for
>>>> a future victim,
>>>
>>> It's hard to express how much I prefer that.
>>
>> Radim, if you want to have a try, go ahead, since I cannot apply the patch
>> until next Monday.
>
> The future I originally had in mind was more hoverboardy, but a series
> just landed, "KVM: x86: simplify RSM into 64-bit protected mode".
>
> Laszlo, I'd be grateful if you could check that it works.
>
I'm tagging it for next week, thanks.
Laszlo