From: Paul Durrant <Paul.Durrant@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Cc: Kevin Tian <kevin.tian@intel.com>,
Jan Beulich <JBeulich@suse.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>,
Jun Nakajima <jun.nakajima@intel.com>,
Boris Ostrovsky <boris.ostrovsky@oracle.com>,
Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Subject: Re: [PATCH 1/2] x86/hvm: Don't raise #GP behind the emulators back for CR accesses
Date: Thu, 2 Mar 2017 15:58:56 +0000 [thread overview]
Message-ID: <01c7a4d2edc34a5dbe3ab31823e5f01b@AMSPEX02CL03.citrite.net> (raw)
In-Reply-To: <1488466778-10163-1-git-send-email-andrew.cooper3@citrix.com>
> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: 02 March 2017 15:00
> To: Xen-devel <xen-devel@lists.xen.org>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Jan Beulich
> <JBeulich@suse.com>; Paul Durrant <Paul.Durrant@citrix.com>; Jun
> Nakajima <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>;
> Boris Ostrovsky <boris.ostrovsky@oracle.com>; Suravee Suthikulpanit
> <suravee.suthikulpanit@amd.com>
> Subject: [PATCH 1/2] x86/hvm: Don't raise #GP behind the emulators back
> for CR accesses
>
> hvm_set_cr{0,4}() are reachable from the emulator, but use
> hvm_inject_hw_exception() directly.
>
> Alter the API to make the callers of hvm_set_cr{0,3,4}() responsible for
> raising #GP, and apply this change to all existing callers.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
[snip emulate code...]

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Paul Durrant <paul.durrant@citrix.com>
> CC: Jun Nakajima <jun.nakajima@intel.com>
> CC: Kevin Tian <kevin.tian@intel.com>
> CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
>
> Issues identified which I am purposefully not fixing in this patch:
>
> (I will try to get around to them, but probably not in the 4.9 timeframe, at this point.)
>
> * hvm_set_cr3() doesn't handle bad 32bit PAE PDPTRs properly, as it doesn't
> actually have a path which raises #GP.
> * There is a lot of redundancy in our HVM CR setting routines, but not
>   enough to trivially dedup at this point.
> * Both nested VT-x and SVM are liable to raise #GP with L1, rather than
>   failing the virtual vmentry/vmexit. This is not a change in behaviour,
>   but is far more obvious now.
> * The hvm_do_resume() path for vm_event processing has the same bug as
>   the MSR side, where exceptions are raised after %rip has moved forwards.
>   This is also not a change in behaviour.
> ---
> xen/arch/x86/hvm/emulate.c        | 24 ++++++++++++----
> xen/arch/x86/hvm/hvm.c            | 59 ++++++++++++++++++++++-----------------
> xen/arch/x86/hvm/svm/nestedsvm.c  | 14 ++++++++++
> xen/arch/x86/hvm/vmx/vmx.c        |  7 ++++-
> xen/arch/x86/hvm/vmx/vvmx.c       | 29 +++++++++++++++----
> xen/include/asm-x86/hvm/support.h |  6 +++-
> 6 files changed, 101 insertions(+), 38 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index 93782d0..1c66010 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -1520,23 +1520,37 @@ static int hvmemul_write_cr(
> unsigned long val,
> struct x86_emulate_ctxt *ctxt)
> {
> + int rc;
> +
> HVMTRACE_LONG_2D(CR_WRITE, reg, TRC_PAR_LONG(val));
> switch ( reg )
> {
> case 0:
> - return hvm_set_cr0(val, 1);
> + rc = hvm_set_cr0(val, 1);
> + break;
> +
> case 2:
> current->arch.hvm_vcpu.guest_cr[2] = val;
> - return X86EMUL_OKAY;
> + rc = X86EMUL_OKAY;
> + break;
> +
> case 3:
> - return hvm_set_cr3(val, 1);
> + rc = hvm_set_cr3(val, 1);
> + break;
> +
> case 4:
> - return hvm_set_cr4(val, 1);
> + rc = hvm_set_cr4(val, 1);
> + break;
> +
> default:
> + rc = X86EMUL_UNHANDLEABLE;
> break;
> }
>
> - return X86EMUL_UNHANDLEABLE;
> + if ( rc == X86EMUL_EXCEPTION )
> + x86_emul_hw_exception(TRAP_gp_fault, 0, ctxt);
> +
> + return rc;
> }
>
> static int hvmemul_read_msr(
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 7432c70..ccfae4f 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -527,19 +527,25 @@ void hvm_do_resume(struct vcpu *v)
>
> if ( w->do_write.cr0 )
> {
> - hvm_set_cr0(w->cr0, 0);
> + if ( hvm_set_cr0(w->cr0, 0) == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +
> w->do_write.cr0 = 0;
> }
>
> if ( w->do_write.cr4 )
> {
> - hvm_set_cr4(w->cr4, 0);
> + if ( hvm_set_cr4(w->cr4, 0) == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +
> w->do_write.cr4 = 0;
> }
>
> if ( w->do_write.cr3 )
> {
> - hvm_set_cr3(w->cr3, 0);
> + if ( hvm_set_cr3(w->cr3, 0) == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +
> w->do_write.cr3 = 0;
> }
> }
> @@ -2068,6 +2074,7 @@ int hvm_mov_to_cr(unsigned int cr, unsigned int gpr)
> {
> struct vcpu *curr = current;
> unsigned long val, *reg;
> + int rc;
>
> if ( (reg = decode_register(gpr, guest_cpu_user_regs(), 0)) == NULL )
> {
> @@ -2082,16 +2089,20 @@ int hvm_mov_to_cr(unsigned int cr, unsigned int gpr)
> switch ( cr )
> {
> case 0:
> - return hvm_set_cr0(val, 1);
> + rc = hvm_set_cr0(val, 1);
> + break;
>
> case 3:
> - return hvm_set_cr3(val, 1);
> + rc = hvm_set_cr3(val, 1);
> + break;
>
> case 4:
> - return hvm_set_cr4(val, 1);
> + rc = hvm_set_cr4(val, 1);
> + break;
>
> case 8:
> vlapic_set_reg(vcpu_vlapic(curr), APIC_TASKPRI, ((val & 0x0f) << 4));
> + rc = X86EMUL_OKAY;
> break;
>
> default:
> @@ -2099,7 +2110,10 @@ int hvm_mov_to_cr(unsigned int cr, unsigned int gpr)
> goto exit_and_crash;
> }
>
> - return X86EMUL_OKAY;
> + if ( rc == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +
> + return rc;
>
> exit_and_crash:
> domain_crash(curr->domain);
> @@ -2199,7 +2213,7 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
> HVM_DBG_LOG(DBG_LEVEL_1,
> "Guest attempts to set upper 32 bits in CR0: %lx",
> value);
> - goto gpf;
> + return X86EMUL_EXCEPTION;
> }
>
> value &= ~HVM_CR0_GUEST_RESERVED_BITS;
> @@ -2209,7 +2223,7 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
>
> if ( !nestedhvm_vmswitch_in_progress(v) &&
> (value & (X86_CR0_PE | X86_CR0_PG)) == X86_CR0_PG )
> - goto gpf;
> + return X86EMUL_EXCEPTION;
>
> /* A pvh is not expected to change to real mode. */
> if ( is_pvh_domain(d) &&
> @@ -2217,7 +2231,7 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
> {
> printk(XENLOG_G_WARNING
> "PVH attempting to turn off PE/PG. CR0:%lx\n", value);
> - goto gpf;
> + return X86EMUL_EXCEPTION;
> }
>
> if ( may_defer && unlikely(v->domain->arch.monitor.write_ctrlreg_enabled &
> @@ -2243,7 +2257,7 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
> !nestedhvm_vmswitch_in_progress(v) )
> {
> HVM_DBG_LOG(DBG_LEVEL_1, "Enable paging before PAE enable");
> - goto gpf;
> + return X86EMUL_EXCEPTION;
> }
> HVM_DBG_LOG(DBG_LEVEL_1, "Enabling long mode");
> v->arch.hvm_vcpu.guest_efer |= EFER_LMA;
> @@ -2276,7 +2290,7 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
> {
> HVM_DBG_LOG(DBG_LEVEL_1, "Guest attempts to clear CR0.PG "
> "while CR4.PCIDE=1");
> - goto gpf;
> + return X86EMUL_EXCEPTION;
> }
>
> /* When CR0.PG is cleared, LMA is cleared immediately. */
> @@ -2310,10 +2324,6 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
> }
>
> return X86EMUL_OKAY;
> -
> - gpf:
> - hvm_inject_hw_exception(TRAP_gp_fault, 0);
> - return X86EMUL_EXCEPTION;
> }
>
> int hvm_set_cr3(unsigned long value, bool_t may_defer)
> @@ -2373,7 +2383,7 @@ int hvm_set_cr4(unsigned long value, bool_t may_defer)
> HVM_DBG_LOG(DBG_LEVEL_1,
> "Guest attempts to set reserved bit in CR4: %lx",
> value);
> - goto gpf;
> + return X86EMUL_EXCEPTION;
> }
>
> if ( !(value & X86_CR4_PAE) )
> @@ -2382,12 +2392,12 @@ int hvm_set_cr4(unsigned long value, bool_t may_defer)
> {
> HVM_DBG_LOG(DBG_LEVEL_1, "Guest cleared CR4.PAE while "
> "EFER.LMA is set");
> - goto gpf;
> + return X86EMUL_EXCEPTION;
> }
> if ( is_pvh_vcpu(v) )
> {
> HVM_DBG_LOG(DBG_LEVEL_1, "32-bit PVH guest cleared CR4.PAE");
> - goto gpf;
> + return X86EMUL_EXCEPTION;
> }
> }
>
> @@ -2399,7 +2409,7 @@ int hvm_set_cr4(unsigned long value, bool_t may_defer)
> {
> HVM_DBG_LOG(DBG_LEVEL_1, "Guest attempts to change CR4.PCIDE from "
> "0 to 1 while either EFER.LMA=0 or CR3[11:0]!=000H");
> - goto gpf;
> + return X86EMUL_EXCEPTION;
> }
>
> if ( may_defer && unlikely(v->domain->arch.monitor.write_ctrlreg_enabled &
> @@ -2434,10 +2444,6 @@ int hvm_set_cr4(unsigned long value, bool_t may_defer)
> }
>
> return X86EMUL_OKAY;
> -
> - gpf:
> - hvm_inject_hw_exception(TRAP_gp_fault, 0);
> - return X86EMUL_EXCEPTION;
> }
>
> bool_t hvm_virtual_to_linear_addr(
> @@ -3020,7 +3026,10 @@ void hvm_task_switch(
> if ( hvm_load_segment_selector(x86_seg_ldtr, tss.ldt, 0) )
> goto out;
>
> - if ( hvm_set_cr3(tss.cr3, 1) )
> + rc = hvm_set_cr3(tss.cr3, 1);
> + if ( rc == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
> + if ( rc != X86EMUL_OKAY )
> goto out;
>
> regs->rip = tss.eip;
> diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
> index f7b7ada..d4fc81f 100644
> --- a/xen/arch/x86/hvm/svm/nestedsvm.c
> +++ b/xen/arch/x86/hvm/svm/nestedsvm.c
> @@ -286,6 +286,8 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v,
> struct cpu_user_regs *regs)
> /* CR4 */
> v->arch.hvm_vcpu.guest_cr[4] = n1vmcb->_cr4;
> rc = hvm_set_cr4(n1vmcb->_cr4, 1);
> + if ( rc == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
> if (rc != X86EMUL_OKAY)
> gdprintk(XENLOG_ERR, "hvm_set_cr4 failed, rc: %u\n", rc);
>
> @@ -295,6 +297,8 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
> v->arch.hvm_vcpu.guest_cr[0] = n1vmcb->_cr0 | X86_CR0_PE;
> n1vmcb->rflags &= ~X86_EFLAGS_VM;
> rc = hvm_set_cr0(n1vmcb->_cr0 | X86_CR0_PE, 1);
> + if ( rc == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
> if (rc != X86EMUL_OKAY)
> gdprintk(XENLOG_ERR, "hvm_set_cr0 failed, rc: %u\n", rc);
> svm->ns_cr0 = v->arch.hvm_vcpu.guest_cr[0];
> @@ -321,6 +325,8 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
> /* hvm_set_cr3() below sets v->arch.hvm_vcpu.guest_cr[3] for us. */
> }
> rc = hvm_set_cr3(n1vmcb->_cr3, 1);
> + if ( rc == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
> if (rc != X86EMUL_OKAY)
> gdprintk(XENLOG_ERR, "hvm_set_cr3 failed, rc: %u\n", rc);
>
> @@ -548,6 +554,8 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
> /* CR4 */
> v->arch.hvm_vcpu.guest_cr[4] = ns_vmcb->_cr4;
> rc = hvm_set_cr4(ns_vmcb->_cr4, 1);
> + if ( rc == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
> if (rc != X86EMUL_OKAY)
> gdprintk(XENLOG_ERR, "hvm_set_cr4 failed, rc: %u\n", rc);
>
> @@ -556,6 +564,8 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
> cr0 = nestedsvm_fpu_vmentry(svm->ns_cr0, ns_vmcb, n1vmcb, n2vmcb);
> v->arch.hvm_vcpu.guest_cr[0] = ns_vmcb->_cr0;
> rc = hvm_set_cr0(cr0, 1);
> + if ( rc == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
> if (rc != X86EMUL_OKAY)
> gdprintk(XENLOG_ERR, "hvm_set_cr0 failed, rc: %u\n", rc);
>
> @@ -572,6 +582,8 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
>
> /* hvm_set_cr3() below sets v->arch.hvm_vcpu.guest_cr[3] for us. */
> rc = hvm_set_cr3(ns_vmcb->_cr3, 1);
> + if ( rc == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
> if (rc != X86EMUL_OKAY)
> gdprintk(XENLOG_ERR, "hvm_set_cr3 failed, rc: %u\n", rc);
> } else if (paging_mode_hap(v->domain)) {
> @@ -584,6 +596,8 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
> */
> /* hvm_set_cr3() below sets v->arch.hvm_vcpu.guest_cr[3] for us. */
> rc = hvm_set_cr3(ns_vmcb->_cr3, 1);
> + if ( rc == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
> if (rc != X86EMUL_OKAY)
> gdprintk(XENLOG_ERR, "hvm_set_cr3 failed, rc: %u\n", rc);
> } else {
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 5b1717d..c2cd12d 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2459,13 +2459,18 @@ static int vmx_cr_access(unsigned long exit_qualification)
> }
> case VMX_CONTROL_REG_ACCESS_TYPE_LMSW: {
> unsigned long value = curr->arch.hvm_vcpu.guest_cr[0];
> + int rc;
>
> /* LMSW can (1) set PE; (2) set or clear MP, EM, and TS. */
> value = (value & ~(X86_CR0_MP|X86_CR0_EM|X86_CR0_TS)) |
> (VMX_CONTROL_REG_ACCESS_DATA(exit_qualification) &
> (X86_CR0_PE|X86_CR0_MP|X86_CR0_EM|X86_CR0_TS));
> HVMTRACE_LONG_1D(LMSW, value);
> - return hvm_set_cr0(value, 1);
> +
> + if ( (rc = hvm_set_cr0(value, 1)) == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +
> + return rc;
> }
> default:
> BUG();
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 0aef7a7..8a486f5 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1046,9 +1046,18 @@ static void load_shadow_guest_state(struct vcpu *v)
>
> nvcpu->guest_cr[0] = get_vvmcs(v, CR0_READ_SHADOW);
> nvcpu->guest_cr[4] = get_vvmcs(v, CR4_READ_SHADOW);
> - hvm_set_cr0(get_vvmcs(v, GUEST_CR0), 1);
> - hvm_set_cr4(get_vvmcs(v, GUEST_CR4), 1);
> - hvm_set_cr3(get_vvmcs(v, GUEST_CR3), 1);
> +
> + rc = hvm_set_cr0(get_vvmcs(v, GUEST_CR0), 1);
> + if ( rc == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +
> + rc = hvm_set_cr4(get_vvmcs(v, GUEST_CR4), 1);
> + if ( rc == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +
> + rc = hvm_set_cr3(get_vvmcs(v, GUEST_CR3), 1);
> + if ( rc == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
>
> control = get_vvmcs(v, VM_ENTRY_CONTROLS);
> if ( control & VM_ENTRY_LOAD_GUEST_PAT )
> @@ -1237,9 +1246,17 @@ static void load_vvmcs_host_state(struct vcpu *v)
> __vmwrite(vmcs_h2g_field[i].guest_field, r);
> }
>
> - hvm_set_cr0(get_vvmcs(v, HOST_CR0), 1);
> - hvm_set_cr4(get_vvmcs(v, HOST_CR4), 1);
> - hvm_set_cr3(get_vvmcs(v, HOST_CR3), 1);
> + rc = hvm_set_cr0(get_vvmcs(v, HOST_CR0), 1);
> + if ( rc == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +
> + rc = hvm_set_cr4(get_vvmcs(v, HOST_CR4), 1);
> + if ( rc == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +
> + rc = hvm_set_cr3(get_vvmcs(v, HOST_CR3), 1);
> + if ( rc == X86EMUL_EXCEPTION )
> + hvm_inject_hw_exception(TRAP_gp_fault, 0);
>
> control = get_vvmcs(v, VM_EXIT_CONTROLS);
> if ( control & VM_EXIT_LOAD_HOST_PAT )
> diff --git a/xen/include/asm-x86/hvm/support.h b/xen/include/asm-x86/hvm/support.h
> index cb41364..632eb90 100644
> --- a/xen/include/asm-x86/hvm/support.h
> +++ b/xen/include/asm-x86/hvm/support.h
> @@ -119,7 +119,11 @@ int __must_check hvm_handle_xsetbv(u32 index, u64 new_bv);
>
> void hvm_shadow_handle_cd(struct vcpu *v, unsigned long value);
>
> -/* These functions all return X86EMUL return codes. */
> +/*
> + * These functions all return X86EMUL return codes. For hvm_set_*(), the
> + * caller is responsible for injecting #GP[0] if X86EMUL_EXCEPTION is
> + * returned.
> + */
> int hvm_set_efer(uint64_t value);
> int hvm_set_cr0(unsigned long value, bool_t may_defer);
> int hvm_set_cr3(unsigned long value, bool_t may_defer);
> --
> 2.1.4
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Thread overview: 9+ messages
2017-03-02 14:59 [PATCH 1/2] x86/hvm: Don't raise #GP behind the emulators back for CR accesses Andrew Cooper
2017-03-02 14:59 ` [PATCH 2/2] x86/emul: Hold x86_emulate() to strict X86EMUL_EXCEPTION requirements Andrew Cooper
2017-03-03 10:18 ` Jan Beulich
2017-03-02 15:58 ` Paul Durrant [this message]
2017-03-02 16:06 ` [PATCH 1/2] x86/hvm: Don't raise #GP behind the emulators back for CR accesses Boris Ostrovsky
2017-03-03 8:02 ` Tian, Kevin
2017-03-03 10:16 ` Jan Beulich
2017-03-03 10:30 ` Andrew Cooper
2017-03-03 10:36 ` Jan Beulich