From mboxrd@z Thu Jan 1 00:00:00 1970
From: Boris Ostrovsky
Subject: Re: [PATCH 2/2] x86/hvm: Don't intercept #UD exceptions in general
Date: Wed, 27 Jan 2016 13:49:06 -0500
Message-ID: <56A91122.50502@oracle.com>
References: <1453918273-23008-1-git-send-email-andrew.cooper3@citrix.com>
 <1453918273-23008-2-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Content-Transfer-Encoding: 7bit
In-Reply-To: <1453918273-23008-2-git-send-email-andrew.cooper3@citrix.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Andrew Cooper, Xen-devel
Cc: Suravee Suthikulpanit, Kevin Tian, Aravind Gopalakrishnan,
 Jun Nakajima, Jan Beulich
List-Id: xen-devel@lists.xenproject.org

On 01/27/2016 01:11 PM, Andrew Cooper wrote:
> c/s 0f1cb96e "x86 hvm: Allow cross-vendor migration" caused HVM domains to
> unconditionally intercept #UD exceptions.  While cross-vendor migration is
> cool as a demo, it is extremely niche.
>
> Intercepting #UD allows userspace code in a multi-vcpu guest to execute
> arbitrary instructions in the x86 emulator by having one thread execute a
> ud2a instruction, and having a second thread rewrite the instruction
> before the emulator performs an instruction fetch.
>
> XSAs 105, 106 and 110 are all examples where guest userspace can use bugs
> in the x86 emulator to compromise security of the domain, either by
> privilege escalation or causing a crash.
>
> c/s 2d67a7a4 "x86: synchronize PCI config space access decoding"
> introduced (amongst other things) a per-domain vendor, based on the
> guest's cpuid policy.
>
> Use the per-guest vendor to enable #UD interception only when a domain is
> configured for a vendor different to the current hardware.  (#UD
> interception is also enabled if hvm_fep is specified on the Xen command
> line.
> This is a debug-only option whose entire purpose is for testing the x86
> emulator.)
>
> As a result, the overwhelming majority of use cases now have #UD
> interception disabled, removing an attack surface for malicious guest
> userspace.
>
> Signed-off-by: Andrew Cooper
> ---
> CC: Jan Beulich
> CC: Jun Nakajima
> CC: Kevin Tian
> CC: Boris Ostrovsky
> CC: Suravee Suthikulpanit
> CC: Aravind Gopalakrishnan
> ---
>  xen/arch/x86/domctl.c         | 18 ++++++++++++++++++
>  xen/arch/x86/hvm/hvm.c        |  6 ++----
>  xen/arch/x86/hvm/svm/svm.c    | 13 +++++++++++++
>  xen/arch/x86/hvm/svm/vmcb.c   |  1 +
>  xen/arch/x86/hvm/vmx/vmcs.c   |  1 +
>  xen/arch/x86/hvm/vmx/vmx.c    | 15 +++++++++++++++
>  xen/include/asm-x86/hvm/hvm.h | 15 ++++++++++++++-
>  7 files changed, 64 insertions(+), 5 deletions(-)
>
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index 1d71216..1084e82 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -65,8 +65,20 @@ static void update_domain_cpuid_info(struct domain *d,
>              .ecx = ctl->ecx
>          }
>      };
> +    int old_vendor = d->arch.x86_vendor;
>
>      d->arch.x86_vendor = get_cpu_vendor(vendor_id.str, gcv_guest);
> +
> +    if ( is_hvm_domain(d) && (d->arch.x86_vendor != old_vendor) )
> +    {
> +        struct vcpu *v;
> +
> +        domain_pause(d);
> +        for_each_vcpu( d, v )
> +            hvm_update_guest_vendor(v);
> +        domain_unpause(d);
> +    }
> +
>      break;
>  }

Not specific to this patch, but shouldn't we pause/unpause the domain for
the whole routine?

>
> @@ -707,6 +719,12 @@ long arch_do_domctl(
>          xen_domctl_cpuid_t *ctl = &domctl->u.cpuid;
>          cpuid_input_t *cpuid, *unused = NULL;
>
> +        if ( d == currd ) /* no domain_pause() */
> +        {
> +            ret = -EINVAL;
> +            break;
> +        }
> +
>          for ( i = 0; i < MAX_CPUID_INPUT; i++ )
>          {
>              cpuid = &d->arch.cpuids[i];

...
>
>  /* Xen command-line option to enable altp2m */
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 953e0b5..44a1250 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -597,6 +597,18 @@ static void svm_update_guest_efer(struct vcpu *v)
>      vmcb_set_efer(vmcb, new_efer);
>  }
>
> +static void svm_update_guest_vendor(struct vcpu *v)
> +{
> +    struct arch_svm_struct *arch_svm = &v->arch.hvm_svm;
> +    struct vmcb_struct *vmcb = arch_svm->vmcb;
> +
> +    if ( opt_hvm_fep ||
> +         (v->domain->arch.x86_vendor != boot_cpu_data.x86_vendor) )
> +        vmcb->_exception_intercepts |= (1U << TRAP_invalid_op);
> +    else
> +        vmcb->_exception_intercepts &= ~(1U << TRAP_invalid_op);
> +}

I think you need to clear the clean bits here (at least bit 0, which
covers the intercepts), since _exception_intercepts is written directly
rather than through the vmcb_set_*() accessors.

-boris