From mboxrd@z Thu Jan 1 00:00:00 1970
From: Laszlo Ersek
Subject: Re: [PATCH] KVM: x86: fix RSM into 64-bit protected mode, round 2
Date: Mon, 26 Oct 2015 16:43:25 +0100
Message-ID: <562E4A1D.7000900@redhat.com>
References: <1445636635-32260-1-git-send-email-lersek@redhat.com> <20151026153715.GA31158@potion.brq.redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: QUOTED-PRINTABLE
Cc: kvm@vger.kernel.org, Paolo Bonzini, Jordan Justen, Michael Kinney, stable@vger.kernel.org
To: Radim Krčmář
In-Reply-To: <20151026153715.GA31158@potion.brq.redhat.com>
Sender: stable-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

On 10/26/15 16:37, Radim Krčmář wrote:
> 2015-10-23 23:43+0200, Laszlo Ersek:
>> Commit b10d92a54dac ("KVM: x86: fix RSM into 64-bit protected mode")
>> reordered the rsm_load_seg_64() and rsm_enter_protected_mode() calls,
>> relative to each other. The argument that said commit made was correct,
>> however putting rsm_enter_protected_mode() first whole-sale violated the
>> following (correct) invariant from em_rsm():
>>
>>      * Get back to real mode, to prepare a safe state in which to load
>>      * CR0/CR3/CR4/EFER.  Also this will ensure that addresses passed
>>      * to read_std/write_std are not virtual.
>
> Nice catch.
>
>> Namely, rsm_enter_protected_mode() may re-enable paging, *after* which
>>
>>   rsm_load_seg_64()
>>     GET_SMSTATE()
>>       read_std()
>>
>> will try to interpret the (smbase + offset) address as a virtual one. This
>> will result in unexpected page faults being injected to the guest in
>> response to the RSM instruction.
>
> I think this is a good time to introduce the read_phys helper, which we
> wanted to avoid with that assumption.
>
>> Split rsm_load_seg_64() in two parts:
>>
>> - The first part, rsm_stash_seg_64(), shall call GET_SMSTATE() while in
>>   real mode, and save the relevant state off SMRAM into an array local to
>>   rsm_load_state_64().
>>
>> - The second part, rsm_load_seg_64(), shall occur after entering protected
>>   mode, but the segment details shall come from the local array, not the
>>   guest's SMRAM.
>>
>> Fixes: b10d92a54dac25a6152f1aa1ffc95c12908035ce
>> Cc: Paolo Bonzini
>> Cc: Radim Krčmář
>> Cc: Jordan Justen
>> Cc: Michael Kinney
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Laszlo Ersek
>> ---
>
> The code would be cleaner if we had a different approach, but this works
> too and is safer for stable. In case you prefer to leave the rewrite for
> a future victim,

It's hard to express how much I prefer that.

>
> Reviewed-by: Radim Krčmář
>

Thank you!
Laszlo
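
For readers following the thread, here is a minimal sketch of the
stash-then-load split described in the patch above. The names used in it
(struct seg_state, smram_read_seg(), cpu_set_seg()) are illustrative
stand-ins, not the KVM emulator's real API; the actual patch goes through
GET_SMSTATE()/read_std() and the emulator's segment setters inside
rsm_load_state_64().

    #include <stdint.h>

    /*
     * Sketch only: seg_state, smram_read_seg() and cpu_set_seg() are
     * hypothetical stand-ins for the emulator's SMRAM accessors and
     * segment-register setters.
     */
    struct seg_state {
            uint16_t selector;
            uint32_t limit;
            uint32_t attributes;
            uint64_t base;
    };

    /* Read one segment image out of SMRAM at smbase + offset; only safe
     * while the vCPU is still in real mode with paging disabled, so the
     * address is interpreted as physical, not virtual. */
    void smram_read_seg(uint64_t smbase, int n, struct seg_state *out);

    /* Program segment register n from cached state; touches no guest
     * memory, so it is safe after paging may have been re-enabled. */
    void cpu_set_seg(int n, const struct seg_state *in);

    /* Part 1: runs before rsm_enter_protected_mode(), while SMRAM reads
     * still use physical addressing. */
    static void rsm_stash_seg_64(uint64_t smbase, struct seg_state cache[6])
    {
            for (int n = 0; n < 6; n++)
                    smram_read_seg(smbase, n, &cache[n]);
    }

    /* Part 2: runs after rsm_enter_protected_mode(); consumes only the
     * local cache, never SMRAM, so re-enabled paging cannot turn the
     * segment loads into faulting virtual accesses. */
    static void rsm_load_seg_64(const struct seg_state cache[6])
    {
            for (int n = 0; n < 6; n++)
                    cpu_set_seg(n, &cache[n]);
    }

The point of the split is purely ordering: every SMRAM access happens in
part 1, while the vCPU is still in real mode, so part 2 can run after
rsm_enter_protected_mode() (which may re-enable paging) without touching
guest memory again, preserving the em_rsm() invariant quoted above.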