From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Alexandru Isaila' <aisaila@bitdefender.com>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Cc: Ian Jackson <Ian.Jackson@citrix.com>,
Wei Liu <wei.liu2@citrix.com>,
"jbeulich@suse.com" <jbeulich@suse.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>
Subject: Re: [PATCH v12 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
Date: Mon, 16 Jul 2018 15:29:44 +0000 [thread overview]
Message-ID: <492015aaf7164a48bc2a2e635ef0277d@AMSPEX02CL02.citrite.net> (raw)
In-Reply-To: <1531752937-10478-4-git-send-email-aisaila@bitdefender.com>
> -----Original Message-----
> From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> Sent: 16 July 2018 15:55
> To: xen-devel@lists.xen.org
> Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul
> Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> <aisaila@bitdefender.com>
> Subject: [PATCH v12 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one
> func
>
> This is used to save the CPU context of a single vcpu instance.
>
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
>
> ---
> Changes since V11:
> - hvm_save_cpu_ctxt() now returns err from
> hvm_save_cpu_ctxt_one().
> ---
> xen/arch/x86/hvm/hvm.c | 216 ++++++++++++++++++++++++++-----------------------
> 1 file changed, 113 insertions(+), 103 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index dd88751..e20a25c 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -787,119 +787,129 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
> HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
> hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
>
> +static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
> +{
> + struct segment_register seg;
> + struct hvm_hw_cpu ctxt;
> +
> + memset(&ctxt, 0, sizeof(ctxt));
Why not use an '= {}' initializer here instead of the memset, as is done elsewhere?
Paul
> +
> + /* Architecture-specific vmcs/vmcb bits */
> + hvm_funcs.save_cpu_ctxt(v, &ctxt);
> +
> + ctxt.tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc);
> +
> + ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
> +
> + hvm_get_segment_register(v, x86_seg_idtr, &seg);
> + ctxt.idtr_limit = seg.limit;
> + ctxt.idtr_base = seg.base;
> +
> + hvm_get_segment_register(v, x86_seg_gdtr, &seg);
> + ctxt.gdtr_limit = seg.limit;
> + ctxt.gdtr_base = seg.base;
> +
> + hvm_get_segment_register(v, x86_seg_cs, &seg);
> + ctxt.cs_sel = seg.sel;
> + ctxt.cs_limit = seg.limit;
> + ctxt.cs_base = seg.base;
> + ctxt.cs_arbytes = seg.attr;
> +
> + hvm_get_segment_register(v, x86_seg_ds, &seg);
> + ctxt.ds_sel = seg.sel;
> + ctxt.ds_limit = seg.limit;
> + ctxt.ds_base = seg.base;
> + ctxt.ds_arbytes = seg.attr;
> +
> + hvm_get_segment_register(v, x86_seg_es, &seg);
> + ctxt.es_sel = seg.sel;
> + ctxt.es_limit = seg.limit;
> + ctxt.es_base = seg.base;
> + ctxt.es_arbytes = seg.attr;
> +
> + hvm_get_segment_register(v, x86_seg_ss, &seg);
> + ctxt.ss_sel = seg.sel;
> + ctxt.ss_limit = seg.limit;
> + ctxt.ss_base = seg.base;
> + ctxt.ss_arbytes = seg.attr;
> +
> + hvm_get_segment_register(v, x86_seg_fs, &seg);
> + ctxt.fs_sel = seg.sel;
> + ctxt.fs_limit = seg.limit;
> + ctxt.fs_base = seg.base;
> + ctxt.fs_arbytes = seg.attr;
> +
> + hvm_get_segment_register(v, x86_seg_gs, &seg);
> + ctxt.gs_sel = seg.sel;
> + ctxt.gs_limit = seg.limit;
> + ctxt.gs_base = seg.base;
> + ctxt.gs_arbytes = seg.attr;
> +
> + hvm_get_segment_register(v, x86_seg_tr, &seg);
> + ctxt.tr_sel = seg.sel;
> + ctxt.tr_limit = seg.limit;
> + ctxt.tr_base = seg.base;
> + ctxt.tr_arbytes = seg.attr;
> +
> + hvm_get_segment_register(v, x86_seg_ldtr, &seg);
> + ctxt.ldtr_sel = seg.sel;
> + ctxt.ldtr_limit = seg.limit;
> + ctxt.ldtr_base = seg.base;
> + ctxt.ldtr_arbytes = seg.attr;
> +
> + if ( v->fpu_initialised )
> + {
> + memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
> + ctxt.flags = XEN_X86_FPU_INITIALISED;
> + }
> +
> + ctxt.rax = v->arch.user_regs.rax;
> + ctxt.rbx = v->arch.user_regs.rbx;
> + ctxt.rcx = v->arch.user_regs.rcx;
> + ctxt.rdx = v->arch.user_regs.rdx;
> + ctxt.rbp = v->arch.user_regs.rbp;
> + ctxt.rsi = v->arch.user_regs.rsi;
> + ctxt.rdi = v->arch.user_regs.rdi;
> + ctxt.rsp = v->arch.user_regs.rsp;
> + ctxt.rip = v->arch.user_regs.rip;
> + ctxt.rflags = v->arch.user_regs.rflags;
> + ctxt.r8 = v->arch.user_regs.r8;
> + ctxt.r9 = v->arch.user_regs.r9;
> + ctxt.r10 = v->arch.user_regs.r10;
> + ctxt.r11 = v->arch.user_regs.r11;
> + ctxt.r12 = v->arch.user_regs.r12;
> + ctxt.r13 = v->arch.user_regs.r13;
> + ctxt.r14 = v->arch.user_regs.r14;
> + ctxt.r15 = v->arch.user_regs.r15;
> + ctxt.dr0 = v->arch.debugreg[0];
> + ctxt.dr1 = v->arch.debugreg[1];
> + ctxt.dr2 = v->arch.debugreg[2];
> + ctxt.dr3 = v->arch.debugreg[3];
> + ctxt.dr6 = v->arch.debugreg[6];
> + ctxt.dr7 = v->arch.debugreg[7];
> +
> + return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
> +}
> +
> static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
> {
> struct vcpu *v;
> - struct hvm_hw_cpu ctxt;
> - struct segment_register seg;
> + int err = 0;
>
> for_each_vcpu ( d, v )
> {
> - /* We don't need to save state for a vcpu that is down; the restore
> - * code will leave it down if there is nothing saved. */
> + /*
> + * We don't need to save state for a vcpu that is down; the restore
> + * code will leave it down if there is nothing saved.
> + */
> if ( v->pause_flags & VPF_down )
> continue;
>
> - memset(&ctxt, 0, sizeof(ctxt));
> -
> - /* Architecture-specific vmcs/vmcb bits */
> - hvm_funcs.save_cpu_ctxt(v, &ctxt);
> -
> - ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm_domain.sync_tsc);
> -
> - ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
> -
> - hvm_get_segment_register(v, x86_seg_idtr, &seg);
> - ctxt.idtr_limit = seg.limit;
> - ctxt.idtr_base = seg.base;
> -
> - hvm_get_segment_register(v, x86_seg_gdtr, &seg);
> - ctxt.gdtr_limit = seg.limit;
> - ctxt.gdtr_base = seg.base;
> -
> - hvm_get_segment_register(v, x86_seg_cs, &seg);
> - ctxt.cs_sel = seg.sel;
> - ctxt.cs_limit = seg.limit;
> - ctxt.cs_base = seg.base;
> - ctxt.cs_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_ds, &seg);
> - ctxt.ds_sel = seg.sel;
> - ctxt.ds_limit = seg.limit;
> - ctxt.ds_base = seg.base;
> - ctxt.ds_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_es, &seg);
> - ctxt.es_sel = seg.sel;
> - ctxt.es_limit = seg.limit;
> - ctxt.es_base = seg.base;
> - ctxt.es_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_ss, &seg);
> - ctxt.ss_sel = seg.sel;
> - ctxt.ss_limit = seg.limit;
> - ctxt.ss_base = seg.base;
> - ctxt.ss_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_fs, &seg);
> - ctxt.fs_sel = seg.sel;
> - ctxt.fs_limit = seg.limit;
> - ctxt.fs_base = seg.base;
> - ctxt.fs_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_gs, &seg);
> - ctxt.gs_sel = seg.sel;
> - ctxt.gs_limit = seg.limit;
> - ctxt.gs_base = seg.base;
> - ctxt.gs_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_tr, &seg);
> - ctxt.tr_sel = seg.sel;
> - ctxt.tr_limit = seg.limit;
> - ctxt.tr_base = seg.base;
> - ctxt.tr_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_ldtr, &seg);
> - ctxt.ldtr_sel = seg.sel;
> - ctxt.ldtr_limit = seg.limit;
> - ctxt.ldtr_base = seg.base;
> - ctxt.ldtr_arbytes = seg.attr;
> -
> - if ( v->fpu_initialised )
> - {
> - memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
> - ctxt.flags = XEN_X86_FPU_INITIALISED;
> - }
> -
> - ctxt.rax = v->arch.user_regs.rax;
> - ctxt.rbx = v->arch.user_regs.rbx;
> - ctxt.rcx = v->arch.user_regs.rcx;
> - ctxt.rdx = v->arch.user_regs.rdx;
> - ctxt.rbp = v->arch.user_regs.rbp;
> - ctxt.rsi = v->arch.user_regs.rsi;
> - ctxt.rdi = v->arch.user_regs.rdi;
> - ctxt.rsp = v->arch.user_regs.rsp;
> - ctxt.rip = v->arch.user_regs.rip;
> - ctxt.rflags = v->arch.user_regs.rflags;
> - ctxt.r8 = v->arch.user_regs.r8;
> - ctxt.r9 = v->arch.user_regs.r9;
> - ctxt.r10 = v->arch.user_regs.r10;
> - ctxt.r11 = v->arch.user_regs.r11;
> - ctxt.r12 = v->arch.user_regs.r12;
> - ctxt.r13 = v->arch.user_regs.r13;
> - ctxt.r14 = v->arch.user_regs.r14;
> - ctxt.r15 = v->arch.user_regs.r15;
> - ctxt.dr0 = v->arch.debugreg[0];
> - ctxt.dr1 = v->arch.debugreg[1];
> - ctxt.dr2 = v->arch.debugreg[2];
> - ctxt.dr3 = v->arch.debugreg[3];
> - ctxt.dr6 = v->arch.debugreg[6];
> - ctxt.dr7 = v->arch.debugreg[7];
> -
> - if ( hvm_save_entry(CPU, v->vcpu_id, h, &ctxt) != 0 )
> - return 1;
> + err = hvm_save_cpu_ctxt_one(v, h);
> + if ( err )
> + break;
> }
> - return 0;
> + return err;
> }
>
> /* Return a string indicating the error, or NULL for valid. */
> --
> 2.7.4
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
Thread overview: 17+ messages
2018-07-16 14:55 [PATCH v12 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 02/11] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
2018-07-16 15:29 ` Paul Durrant [this message]
2018-07-17 12:25 ` Isaila Alexandru
2018-07-17 14:03 ` Jan Beulich
2018-07-17 14:57 ` Isaila Alexandru
2018-07-16 14:55 ` [PATCH v12 04/11] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
2018-07-16 15:34 ` Paul Durrant
2018-07-16 14:55 ` [PATCH v12 08/11] x86/hvm: Add handler for save_one funcs Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 10/11] x86/hvm: Remove redundant save functions Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 11/11] x86/hvm: Remove save_one handler Alexandru Isaila