* [PATCH v12 00/11] x86/domctl: Save info for one vcpu instance
@ 2018-07-16 14:55 Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
` (10 more replies)
0 siblings, 11 replies; 17+ messages in thread
From: Alexandru Isaila @ 2018-07-16 14:55 UTC (permalink / raw)
To: xen-devel; +Cc: wei.liu2, paul.durrant, ian.jackson, jbeulich, andrew.cooper3
Hi all,
This patch series addresses the idea of saving data for a single vcpu instance.
It starts by adding per-vcpu *_one save functions, then introduces a handler for
the new save_one funcs and makes use of it in the hvm_save and hvm_save_one funcs.
The final two patches are clean-up: the first removes the old per-domain save
funcs and renames the *_one variants to take their place; the last removes the
save_one handler, which is by then no longer used.
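For reference, here is a minimal stand-alone sketch of the pattern the series
converges on: a per-vcpu save_one worker plus a thin per-domain loop. The types
and names below are simplified stand-ins for illustration only, not the real
Xen interfaces.

    /*
     * Minimal stand-alone illustration of the save/save_one split.
     * Simplified stand-in types only -- not the actual Xen structures
     * or prototypes.
     */
    #include <stdio.h>

    struct vcpu { int vcpu_id; long tsc_adjust; };
    struct domain { int max_vcpus; struct vcpu *vcpu[2]; };
    typedef struct { unsigned int cur; } ctx_t;  /* stand-in for hvm_domain_context_t */

    /* Per-vcpu worker: saves the record for exactly one vcpu. */
    static int save_tsc_adjust_one(struct vcpu *v, ctx_t *h)
    {
        printf("saving TSC_ADJUST for vcpu %d: %ld\n", v->vcpu_id, v->tsc_adjust);
        h->cur += sizeof(v->tsc_adjust);  /* pretend a record was appended */
        return 0;                         /* 0 on success, non-zero on error */
    }

    /* Per-domain wrapper: loops over all vcpus, stops on the first error. */
    static int save_tsc_adjust(struct domain *d, ctx_t *h)
    {
        int err = 0;

        for ( int i = 0; i < d->max_vcpus; i++ )
        {
            err = save_tsc_adjust_one(d->vcpu[i], h);
            if ( err )
                break;
        }

        return err;
    }

    int main(void)
    {
        struct vcpu v0 = { 0, 100 }, v1 = { 1, 200 };
        struct domain d = { 2, { &v0, &v1 } };
        ctx_t h = { 0 };

        return save_tsc_adjust(&d, &h);
    }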
Cheers,
Alexandru Isaila (11):
x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
x86/hvm: Introduce hvm_save_tsc_adjust_one() func
x86/hvm: Introduce hvm_save_cpu_ctxt_one func
x86/hvm: Introduce hvm_save_cpu_xsave_states_one
x86/hvm: Introduce hvm_save_cpu_msrs_one func
x86/hvm: Introduce hvm_save_mtrr_msr_one func
x86/hvm: Introduce viridian_save_vcpu_ctxt_one()
x86/hvm: Add handler for save_one funcs
x86/domctl: Don't pause the whole domain if only getting vcpu state
x86/hvm: Remove redundant save functions
x86/hvm: Remove save_one handler
* [PATCH v12 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
2018-07-16 14:55 [PATCH v12 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
@ 2018-07-16 14:55 ` Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 02/11] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
` (9 subsequent siblings)
10 siblings, 0 replies; 17+ messages in thread
From: Alexandru Isaila @ 2018-07-16 14:55 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single vcpu instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V11:
- Removed the memset and added init with {}.
---
xen/arch/x86/cpu/mcheck/vmce.c | 21 +++++++++++++--------
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index e07cd2f..31e553c 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,6 +349,18 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
return ret;
}
+static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+ struct hvm_vmce_vcpu ctxt = {
+ .caps = v->arch.vmce.mcg_cap,
+ .mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
+ .mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
+ .mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
+ };
+
+ return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+}
+
static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
{
struct vcpu *v;
@@ -356,14 +368,7 @@ static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
for_each_vcpu ( d, v )
{
- struct hvm_vmce_vcpu ctxt = {
- .caps = v->arch.vmce.mcg_cap,
- .mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
- .mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
- .mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
- };
-
- err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+ err = vmce_save_vcpu_ctxt_one(v, h);
if ( err )
break;
}
--
2.7.4
* [PATCH v12 02/11] x86/hvm: Introduce hvm_save_tsc_adjust_one() func
2018-07-16 14:55 [PATCH v12 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-07-16 14:55 ` Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
` (8 subsequent siblings)
10 siblings, 0 replies; 17+ messages in thread
From: Alexandru Isaila @ 2018-07-16 14:55 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single vcpu instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V9:
- Changed the save_one func to return hvm_save_entry() directly.
---
xen/arch/x86/hvm/hvm.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 93092d2..dd88751 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,16 +740,23 @@ void hvm_domain_destroy(struct domain *d)
destroy_vpci_mmcfg(d);
}
+static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+ struct hvm_tsc_adjust ctxt = {};
+
+ ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
+
+ return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+}
+
static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
{
struct vcpu *v;
- struct hvm_tsc_adjust ctxt;
int err = 0;
for_each_vcpu ( d, v )
{
- ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
- err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+ err = hvm_save_tsc_adjust_one(v, h);
if ( err )
break;
}
--
2.7.4
* [PATCH v12 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
2018-07-16 14:55 [PATCH v12 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 02/11] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
@ 2018-07-16 14:55 ` Alexandru Isaila
2018-07-16 15:29 ` Paul Durrant
2018-07-16 14:55 ` [PATCH v12 04/11] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
` (7 subsequent siblings)
10 siblings, 1 reply; 17+ messages in thread
From: Alexandru Isaila @ 2018-07-16 14:55 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single vcpu instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V11:
- hvm_save_cpu_ctxt() now returns err from
hvm_save_cpu_ctxt_one().
---
xen/arch/x86/hvm/hvm.c | 216 ++++++++++++++++++++++++++-----------------------
1 file changed, 113 insertions(+), 103 deletions(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index dd88751..e20a25c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -787,119 +787,129 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
+static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+ struct segment_register seg;
+ struct hvm_hw_cpu ctxt;
+
+ memset(&ctxt, 0, sizeof(ctxt));
+
+ /* Architecture-specific vmcs/vmcb bits */
+ hvm_funcs.save_cpu_ctxt(v, &ctxt);
+
+ ctxt.tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc);
+
+ ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
+
+ hvm_get_segment_register(v, x86_seg_idtr, &seg);
+ ctxt.idtr_limit = seg.limit;
+ ctxt.idtr_base = seg.base;
+
+ hvm_get_segment_register(v, x86_seg_gdtr, &seg);
+ ctxt.gdtr_limit = seg.limit;
+ ctxt.gdtr_base = seg.base;
+
+ hvm_get_segment_register(v, x86_seg_cs, &seg);
+ ctxt.cs_sel = seg.sel;
+ ctxt.cs_limit = seg.limit;
+ ctxt.cs_base = seg.base;
+ ctxt.cs_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_ds, &seg);
+ ctxt.ds_sel = seg.sel;
+ ctxt.ds_limit = seg.limit;
+ ctxt.ds_base = seg.base;
+ ctxt.ds_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_es, &seg);
+ ctxt.es_sel = seg.sel;
+ ctxt.es_limit = seg.limit;
+ ctxt.es_base = seg.base;
+ ctxt.es_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_ss, &seg);
+ ctxt.ss_sel = seg.sel;
+ ctxt.ss_limit = seg.limit;
+ ctxt.ss_base = seg.base;
+ ctxt.ss_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_fs, &seg);
+ ctxt.fs_sel = seg.sel;
+ ctxt.fs_limit = seg.limit;
+ ctxt.fs_base = seg.base;
+ ctxt.fs_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_gs, &seg);
+ ctxt.gs_sel = seg.sel;
+ ctxt.gs_limit = seg.limit;
+ ctxt.gs_base = seg.base;
+ ctxt.gs_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_tr, &seg);
+ ctxt.tr_sel = seg.sel;
+ ctxt.tr_limit = seg.limit;
+ ctxt.tr_base = seg.base;
+ ctxt.tr_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_ldtr, &seg);
+ ctxt.ldtr_sel = seg.sel;
+ ctxt.ldtr_limit = seg.limit;
+ ctxt.ldtr_base = seg.base;
+ ctxt.ldtr_arbytes = seg.attr;
+
+ if ( v->fpu_initialised )
+ {
+ memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
+ ctxt.flags = XEN_X86_FPU_INITIALISED;
+ }
+
+ ctxt.rax = v->arch.user_regs.rax;
+ ctxt.rbx = v->arch.user_regs.rbx;
+ ctxt.rcx = v->arch.user_regs.rcx;
+ ctxt.rdx = v->arch.user_regs.rdx;
+ ctxt.rbp = v->arch.user_regs.rbp;
+ ctxt.rsi = v->arch.user_regs.rsi;
+ ctxt.rdi = v->arch.user_regs.rdi;
+ ctxt.rsp = v->arch.user_regs.rsp;
+ ctxt.rip = v->arch.user_regs.rip;
+ ctxt.rflags = v->arch.user_regs.rflags;
+ ctxt.r8 = v->arch.user_regs.r8;
+ ctxt.r9 = v->arch.user_regs.r9;
+ ctxt.r10 = v->arch.user_regs.r10;
+ ctxt.r11 = v->arch.user_regs.r11;
+ ctxt.r12 = v->arch.user_regs.r12;
+ ctxt.r13 = v->arch.user_regs.r13;
+ ctxt.r14 = v->arch.user_regs.r14;
+ ctxt.r15 = v->arch.user_regs.r15;
+ ctxt.dr0 = v->arch.debugreg[0];
+ ctxt.dr1 = v->arch.debugreg[1];
+ ctxt.dr2 = v->arch.debugreg[2];
+ ctxt.dr3 = v->arch.debugreg[3];
+ ctxt.dr6 = v->arch.debugreg[6];
+ ctxt.dr7 = v->arch.debugreg[7];
+
+ return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
+}
+
static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
{
struct vcpu *v;
- struct hvm_hw_cpu ctxt;
- struct segment_register seg;
+ int err = 0;
for_each_vcpu ( d, v )
{
- /* We don't need to save state for a vcpu that is down; the restore
- * code will leave it down if there is nothing saved. */
+ /*
+ * We don't need to save state for a vcpu that is down; the restore
+ * code will leave it down if there is nothing saved.
+ */
if ( v->pause_flags & VPF_down )
continue;
- memset(&ctxt, 0, sizeof(ctxt));
-
- /* Architecture-specific vmcs/vmcb bits */
- hvm_funcs.save_cpu_ctxt(v, &ctxt);
-
- ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm_domain.sync_tsc);
-
- ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
-
- hvm_get_segment_register(v, x86_seg_idtr, &seg);
- ctxt.idtr_limit = seg.limit;
- ctxt.idtr_base = seg.base;
-
- hvm_get_segment_register(v, x86_seg_gdtr, &seg);
- ctxt.gdtr_limit = seg.limit;
- ctxt.gdtr_base = seg.base;
-
- hvm_get_segment_register(v, x86_seg_cs, &seg);
- ctxt.cs_sel = seg.sel;
- ctxt.cs_limit = seg.limit;
- ctxt.cs_base = seg.base;
- ctxt.cs_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_ds, &seg);
- ctxt.ds_sel = seg.sel;
- ctxt.ds_limit = seg.limit;
- ctxt.ds_base = seg.base;
- ctxt.ds_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_es, &seg);
- ctxt.es_sel = seg.sel;
- ctxt.es_limit = seg.limit;
- ctxt.es_base = seg.base;
- ctxt.es_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_ss, &seg);
- ctxt.ss_sel = seg.sel;
- ctxt.ss_limit = seg.limit;
- ctxt.ss_base = seg.base;
- ctxt.ss_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_fs, &seg);
- ctxt.fs_sel = seg.sel;
- ctxt.fs_limit = seg.limit;
- ctxt.fs_base = seg.base;
- ctxt.fs_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_gs, &seg);
- ctxt.gs_sel = seg.sel;
- ctxt.gs_limit = seg.limit;
- ctxt.gs_base = seg.base;
- ctxt.gs_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_tr, &seg);
- ctxt.tr_sel = seg.sel;
- ctxt.tr_limit = seg.limit;
- ctxt.tr_base = seg.base;
- ctxt.tr_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_ldtr, &seg);
- ctxt.ldtr_sel = seg.sel;
- ctxt.ldtr_limit = seg.limit;
- ctxt.ldtr_base = seg.base;
- ctxt.ldtr_arbytes = seg.attr;
-
- if ( v->fpu_initialised )
- {
- memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
- ctxt.flags = XEN_X86_FPU_INITIALISED;
- }
-
- ctxt.rax = v->arch.user_regs.rax;
- ctxt.rbx = v->arch.user_regs.rbx;
- ctxt.rcx = v->arch.user_regs.rcx;
- ctxt.rdx = v->arch.user_regs.rdx;
- ctxt.rbp = v->arch.user_regs.rbp;
- ctxt.rsi = v->arch.user_regs.rsi;
- ctxt.rdi = v->arch.user_regs.rdi;
- ctxt.rsp = v->arch.user_regs.rsp;
- ctxt.rip = v->arch.user_regs.rip;
- ctxt.rflags = v->arch.user_regs.rflags;
- ctxt.r8 = v->arch.user_regs.r8;
- ctxt.r9 = v->arch.user_regs.r9;
- ctxt.r10 = v->arch.user_regs.r10;
- ctxt.r11 = v->arch.user_regs.r11;
- ctxt.r12 = v->arch.user_regs.r12;
- ctxt.r13 = v->arch.user_regs.r13;
- ctxt.r14 = v->arch.user_regs.r14;
- ctxt.r15 = v->arch.user_regs.r15;
- ctxt.dr0 = v->arch.debugreg[0];
- ctxt.dr1 = v->arch.debugreg[1];
- ctxt.dr2 = v->arch.debugreg[2];
- ctxt.dr3 = v->arch.debugreg[3];
- ctxt.dr6 = v->arch.debugreg[6];
- ctxt.dr7 = v->arch.debugreg[7];
-
- if ( hvm_save_entry(CPU, v->vcpu_id, h, &ctxt) != 0 )
- return 1;
+ err = hvm_save_cpu_ctxt_one(v, h);
+ if ( err )
+ break;
}
- return 0;
+ return err;
}
/* Return a string indicating the error, or NULL for valid. */
--
2.7.4
* [PATCH v12 04/11] x86/hvm: Introduce hvm_save_cpu_xsave_states_one
2018-07-16 14:55 [PATCH v12 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (2 preceding siblings ...)
2018-07-16 14:55 ` [PATCH v12 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
@ 2018-07-16 14:55 ` Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
` (6 subsequent siblings)
10 siblings, 0 replies; 17+ messages in thread
From: Alexandru Isaila @ 2018-07-16 14:55 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single vcpu instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V11:
- hvm_save_cpu_xsave_states_one() returns the err from
_hvm_init_entry().
- hvm_save_cpu_xsave_states() returns err from
hvm_save_cpu_xsave_states_one().
---
xen/arch/x86/hvm/hvm.c | 42 +++++++++++++++++++++++++++---------------
1 file changed, 27 insertions(+), 15 deletions(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e20a25c..1da6a26 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1190,33 +1190,45 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
save_area) + \
xstate_ctxt_size(xcr0))
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
struct hvm_hw_cpu_xsave *ctxt;
+ unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+ int err = 0;
if ( !cpu_has_xsave )
return 0; /* do nothing */
+ err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
+ if ( err )
+ return err;
+
+ ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
+ h->cur += size;
+ ctxt->xfeature_mask = xfeature_mask;
+ ctxt->xcr0 = v->arch.xcr0;
+ ctxt->xcr0_accum = v->arch.xcr0_accum;
+
+ expand_xsave_states(v, &ctxt->save_area,
+ size - offsetof(typeof(*ctxt), save_area));
+ return 0;
+}
+
+static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+{
+ struct vcpu *v;
+ int err = 0;
+
for_each_vcpu ( d, v )
{
- unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
-
if ( !xsave_enabled(v) )
continue;
- if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
- return 1;
- ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
- h->cur += size;
-
- ctxt->xfeature_mask = xfeature_mask;
- ctxt->xcr0 = v->arch.xcr0;
- ctxt->xcr0_accum = v->arch.xcr0_accum;
- expand_xsave_states(v, &ctxt->save_area,
- size - offsetof(typeof(*ctxt), save_area));
+ err = hvm_save_cpu_xsave_states_one(v, h);
+ if ( err )
+ break;
}
- return 0;
+ return err;
}
/*
--
2.7.4
* [PATCH v12 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one func
2018-07-16 14:55 [PATCH v12 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (3 preceding siblings ...)
2018-07-16 14:55 ` [PATCH v12 04/11] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
@ 2018-07-16 14:55 ` Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
` (5 subsequent siblings)
10 siblings, 0 replies; 17+ messages in thread
From: Alexandru Isaila @ 2018-07-16 14:55 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single vcpu instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
---
Changes since V11:
- hvm_save_cpu_msrs() returns err from hvm_save_cpu_msrs_one().
---
xen/arch/x86/hvm/hvm.c | 105 +++++++++++++++++++++++++++----------------------
1 file changed, 58 insertions(+), 47 deletions(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1da6a26..70d1b20 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1368,69 +1368,80 @@ static const uint32_t msrs_to_send[] = {
};
static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
+ struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
+ struct hvm_msr *ctxt;
+ unsigned int i;
+ int err = 0;
- for_each_vcpu ( d, v )
+ err = _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+ HVM_CPU_MSR_SIZE(msr_count_max));
+ if ( err )
+ return err;
+ ctxt = (struct hvm_msr *)&h->data[h->cur];
+ ctxt->count = 0;
+
+ for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
{
- struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
- struct hvm_msr *ctxt;
- unsigned int i;
+ uint64_t val;
+ int rc = guest_rdmsr(v, msrs_to_send[i], &val);
- if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
- HVM_CPU_MSR_SIZE(msr_count_max)) )
- return 1;
- ctxt = (struct hvm_msr *)&h->data[h->cur];
- ctxt->count = 0;
+ /*
+ * It is the programmers responsibility to ensure that
+ * msrs_to_send[] contain generally-read/write MSRs.
+ * X86EMUL_EXCEPTION here implies a missing feature, and that the
+ * guest doesn't have access to the MSR.
+ */
+ if ( rc == X86EMUL_EXCEPTION )
+ continue;
- for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
+ if ( rc != X86EMUL_OKAY )
{
- uint64_t val;
- int rc = guest_rdmsr(v, msrs_to_send[i], &val);
+ ASSERT_UNREACHABLE();
+ return -ENXIO;
+ }
- /*
- * It is the programmers responsibility to ensure that
- * msrs_to_send[] contain generally-read/write MSRs.
- * X86EMUL_EXCEPTION here implies a missing feature, and that the
- * guest doesn't have access to the MSR.
- */
- if ( rc == X86EMUL_EXCEPTION )
- continue;
+ if ( !val )
+ continue; /* Skip empty MSRs. */
- if ( rc != X86EMUL_OKAY )
- {
- ASSERT_UNREACHABLE();
- return -ENXIO;
- }
+ ctxt->msr[ctxt->count].index = msrs_to_send[i];
+ ctxt->msr[ctxt->count++].val = val;
+ }
- if ( !val )
- continue; /* Skip empty MSRs. */
+ if ( hvm_funcs.save_msr )
+ hvm_funcs.save_msr(v, ctxt);
- ctxt->msr[ctxt->count].index = msrs_to_send[i];
- ctxt->msr[ctxt->count++].val = val;
- }
+ ASSERT(ctxt->count <= msr_count_max);
- if ( hvm_funcs.save_msr )
- hvm_funcs.save_msr(v, ctxt);
+ for ( i = 0; i < ctxt->count; ++i )
+ ctxt->msr[i]._rsvd = 0;
- ASSERT(ctxt->count <= msr_count_max);
+ if ( ctxt->count )
+ {
+ /* Rewrite length to indicate how much space we actually used. */
+ desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
+ h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+ }
+ else
+ /* or rewind and remove the descriptor from the stream. */
+ h->cur -= sizeof(struct hvm_save_descriptor);
+ return 0;
+}
- for ( i = 0; i < ctxt->count; ++i )
- ctxt->msr[i]._rsvd = 0;
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+ struct vcpu *v;
+ int err = 0;
- if ( ctxt->count )
- {
- /* Rewrite length to indicate how much space we actually used. */
- desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
- h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
- }
- else
- /* or rewind and remove the descriptor from the stream. */
- h->cur -= sizeof(struct hvm_save_descriptor);
+ for_each_vcpu ( d, v )
+ {
+ err = hvm_save_cpu_msrs_one(v, h);
+ if ( err )
+ break;
}
- return 0;
+ return err;
}
static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
--
2.7.4
* [PATCH v12 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func
2018-07-16 14:55 [PATCH v12 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (4 preceding siblings ...)
2018-07-16 14:55 ` [PATCH v12 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
@ 2018-07-16 14:55 ` Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
` (4 subsequent siblings)
10 siblings, 0 replies; 17+ messages in thread
From: Alexandru Isaila @ 2018-07-16 14:55 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single vcpu instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since v11:
- hvm_save_mtrr_msr() now returns err from hvm_save_mtrr_msr_one().
Note: This patch is based on Roger Pau Monne's series[1]
---
xen/arch/x86/hvm/mtrr.c | 81 +++++++++++++++++++++++++++----------------------
1 file changed, 44 insertions(+), 37 deletions(-)
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 48facbb..47a5c29 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -718,52 +718,59 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
return 0;
}
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
+ const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
+ struct hvm_hw_mtrr hw_mtrr = {
+ .msr_mtrr_def_type = mtrr_state->def_type |
+ MASK_INSR(mtrr_state->fixed_enabled,
+ MTRRdefType_FE) |
+ MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
+ .msr_mtrr_cap = mtrr_state->mtrr_cap,
+ };
+ unsigned int i;
- /* save mtrr&pat */
- for_each_vcpu(d, v)
+ if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
+ (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
{
- const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
- struct hvm_hw_mtrr hw_mtrr = {
- .msr_mtrr_def_type = mtrr_state->def_type |
- MASK_INSR(mtrr_state->fixed_enabled,
- MTRRdefType_FE) |
- MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
- .msr_mtrr_cap = mtrr_state->mtrr_cap,
- };
- unsigned int i;
+ dprintk(XENLOG_G_ERR,
+ "HVM save: %pv: too many (%lu) variable range MTRRs\n",
+ v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
+ return -EINVAL;
+ }
- if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
- (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
- {
- dprintk(XENLOG_G_ERR,
- "HVM save: %pv: too many (%lu) variable range MTRRs\n",
- v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
- return -EINVAL;
- }
+ hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
- hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+ for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
+ {
+ /* save physbase */
+ hw_mtrr.msr_mtrr_var[i*2] =
+ ((uint64_t*)mtrr_state->var_ranges)[i*2];
+ /* save physmask */
+ hw_mtrr.msr_mtrr_var[i*2+1] =
+ ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
+ }
- for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
- {
- /* save physbase */
- hw_mtrr.msr_mtrr_var[i*2] =
- ((uint64_t*)mtrr_state->var_ranges)[i*2];
- /* save physmask */
- hw_mtrr.msr_mtrr_var[i*2+1] =
- ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
- }
+ for ( i = 0; i < NUM_FIXED_MSR; i++ )
+ hw_mtrr.msr_mtrr_fixed[i] =
+ ((uint64_t*)mtrr_state->fixed_ranges)[i];
- for ( i = 0; i < NUM_FIXED_MSR; i++ )
- hw_mtrr.msr_mtrr_fixed[i] =
- ((uint64_t*)mtrr_state->fixed_ranges)[i];
+ return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
+}
- if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
- return 1;
+static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+{
+ struct vcpu *v;
+ int err = 0;
+
+ /* save mtrr&pat */
+ for_each_vcpu(d, v)
+ {
+ err = hvm_save_mtrr_msr_one(v, h);
+ if ( err )
+ break;
}
- return 0;
+ return err;
}
static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
--
2.7.4
* [PATCH v12 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func
2018-07-16 14:55 [PATCH v12 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (5 preceding siblings ...)
2018-07-16 14:55 ` [PATCH v12 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
@ 2018-07-16 14:55 ` Alexandru Isaila
2018-07-16 15:34 ` Paul Durrant
2018-07-16 14:55 ` [PATCH v12 08/11] x86/hvm: Add handler for save_one funcs Alexandru Isaila
` (3 subsequent siblings)
10 siblings, 1 reply; 17+ messages in thread
From: Alexandru Isaila @ 2018-07-16 14:55 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single vcpu instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
xen/arch/x86/hvm/viridian.c | 29 +++++++++++++++++++----------
1 file changed, 19 insertions(+), 10 deletions(-)
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 694eae6..f3430bb 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1026,24 +1026,33 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
+ struct hvm_viridian_vcpu_context ctxt = {};
- if ( !is_viridian_domain(d) )
+ if ( !is_viridian_domain(v->domain) )
return 0;
+ ctxt.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw;
+ ctxt.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending;
+
+ return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
+}
+
+static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+ struct vcpu *v;
+ int err = 0;
+
for_each_vcpu( d, v ) {
- struct hvm_viridian_vcpu_context ctxt = {
- .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
- .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
- };
- if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
- return 1;
+ err = viridian_save_vcpu_ctxt_one(v, h);
+
+ if ( err )
+ break;
}
- return 0;
+ return err;
}
static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
--
2.7.4
* [PATCH v12 08/11] x86/hvm: Add handler for save_one funcs
2018-07-16 14:55 [PATCH v12 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (6 preceding siblings ...)
2018-07-16 14:55 ` [PATCH v12 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-07-16 14:55 ` Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
` (2 subsequent siblings)
10 siblings, 0 replies; 17+ messages in thread
From: Alexandru Isaila @ 2018-07-16 14:55 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V8:
- Add comment for the handler return values.
---
xen/arch/x86/cpu/mcheck/vmce.c | 1 +
xen/arch/x86/hvm/hpet.c | 2 +-
xen/arch/x86/hvm/hvm.c | 6 +++++-
xen/arch/x86/hvm/i8254.c | 2 +-
xen/arch/x86/hvm/irq.c | 6 +++---
xen/arch/x86/hvm/mtrr.c | 4 ++--
xen/arch/x86/hvm/pmtimer.c | 2 +-
xen/arch/x86/hvm/rtc.c | 2 +-
xen/arch/x86/hvm/save.c | 5 ++++-
xen/arch/x86/hvm/vioapic.c | 2 +-
xen/arch/x86/hvm/viridian.c | 3 ++-
xen/arch/x86/hvm/vlapic.c | 4 ++--
xen/arch/x86/hvm/vpic.c | 2 +-
xen/include/asm-x86/hvm/save.h | 6 +++++-
14 files changed, 30 insertions(+), 17 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 31e553c..35044d7 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -396,6 +396,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
+ vmce_save_vcpu_ctxt_one,
vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
/*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 2837709..aff8613 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -640,7 +640,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
static void hpet_set(HPETState *h)
{
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 70d1b20..c458dab 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -785,6 +785,7 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
+ hvm_save_tsc_adjust_one,
hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1183,7 +1184,8 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+ hvm_load_cpu_ctxt,
1, HVMSR_PER_VCPU);
#define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
@@ -1536,6 +1538,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
hvm_register_savevm(CPU_XSAVE_CODE,
"CPU_XSAVE",
hvm_save_cpu_xsave_states,
+ hvm_save_cpu_xsave_states_one,
hvm_load_cpu_xsave_states,
HVM_CPU_XSAVE_SIZE(xfeature_mask) +
sizeof(struct hvm_save_descriptor),
@@ -1548,6 +1551,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
hvm_register_savevm(CPU_MSR_CODE,
"CPU_MSR",
hvm_save_cpu_msrs,
+ hvm_save_cpu_msrs_one,
hvm_load_cpu_msrs,
HVM_CPU_MSR_SIZE(msr_count_max) +
sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index 992f08d..ec77b23 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -437,7 +437,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
void pit_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index c85d004..770eab7 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -764,9 +764,9 @@ static int irq_load_link(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa,
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 47a5c29..1cb2a2e 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -823,8 +823,8 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_load_mtrr_msr,
- 1, HVMSR_PER_VCPU);
+HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_save_mtrr_msr_one,
+ hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
void memory_type_changed(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 435647f..0a5e8ce 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -309,7 +309,7 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, acpi_load,
+HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, NULL, acpi_load,
1, HVMSR_PER_DOM);
int pmtimer_change_ioport(struct domain *d, unsigned int version)
diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index cb75b99..ce7e71b 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -783,7 +783,7 @@ static int rtc_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, rtc_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, NULL, rtc_load, 1, HVMSR_PER_DOM);
void rtc_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 8984a23..b674937 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -85,16 +85,18 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
/* List of handlers for various HVM save and restore types */
static struct {
hvm_save_handler save;
+ hvm_save_one_handler save_one;
hvm_load_handler load;
const char *name;
size_t size;
int kind;
-} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, "<?>"}, };
+} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, NULL, "<?>"}, };
/* Init-time function to add entries to that list */
void __init hvm_register_savevm(uint16_t typecode,
const char *name,
hvm_save_handler save_state,
+ hvm_save_one_handler save_one,
hvm_load_handler load_state,
size_t size, int kind)
{
@@ -102,6 +104,7 @@ void __init hvm_register_savevm(uint16_t typecode,
ASSERT(hvm_sr_handlers[typecode].save == NULL);
ASSERT(hvm_sr_handlers[typecode].load == NULL);
hvm_sr_handlers[typecode].save = save_state;
+ hvm_sr_handlers[typecode].save_one = save_one;
hvm_sr_handlers[typecode].load = load_state;
hvm_sr_handlers[typecode].name = name;
hvm_sr_handlers[typecode].size = size;
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 97b419f..66f54e4 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -601,7 +601,7 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
return hvm_load_entry(IOAPIC, h, &s->domU);
}
-HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, ioapic_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, NULL, ioapic_load, 1, HVMSR_PER_DOM);
void vioapic_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index f3430bb..1571336 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1023,7 +1023,7 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
+HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt, NULL,
viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1086,6 +1086,7 @@ static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_VCPU, viridian_save_vcpu_ctxt,
+ viridian_save_vcpu_ctxt_one,
viridian_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
static int __init parse_viridian_version(const char *arg)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 1b9f00a..eff6070 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1576,9 +1576,9 @@ static int lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_load_hidden,
+HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, NULL, lapic_load_hidden,
1, HVMSR_PER_VCPU);
-HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_load_regs,
+HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, NULL, lapic_load_regs,
1, HVMSR_PER_VCPU);
int vlapic_init(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index e160bbd..ca9b4cb 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -411,7 +411,7 @@ static int vpic_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, vpic_load, 2, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, NULL, vpic_load, 2, HVMSR_PER_DOM);
void vpic_reset(struct domain *d)
{
diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-x86/hvm/save.h
index f889e8f..2538628 100644
--- a/xen/include/asm-x86/hvm/save.h
+++ b/xen/include/asm-x86/hvm/save.h
@@ -97,6 +97,8 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
* restoring. Both return non-zero on error. */
typedef int (*hvm_save_handler) (struct domain *d,
hvm_domain_context_t *h);
+typedef int (*hvm_save_one_handler)(struct vcpu *v,
+ hvm_domain_context_t *h);
typedef int (*hvm_load_handler) (struct domain *d,
hvm_domain_context_t *h);
@@ -105,6 +107,7 @@ typedef int (*hvm_load_handler) (struct domain *d,
void hvm_register_savevm(uint16_t typecode,
const char *name,
hvm_save_handler save_state,
+ hvm_save_one_handler save_one,
hvm_load_handler load_state,
size_t size, int kind);
@@ -114,12 +117,13 @@ void hvm_register_savevm(uint16_t typecode,
/* Syntactic sugar around that function: specify the max number of
* saves, and this calculates the size of buffer needed */
-#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k) \
+#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load, _num, _k) \
static int __init __hvm_register_##_x##_save_and_restore(void) \
{ \
hvm_register_savevm(HVM_SAVE_CODE(_x), \
#_x, \
&_save, \
+ _save_one, \
&_load, \
(_num) * (HVM_SAVE_LENGTH(_x) \
+ sizeof (struct hvm_save_descriptor)), \
--
2.7.4
* [PATCH v12 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state
2018-07-16 14:55 [PATCH v12 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (7 preceding siblings ...)
2018-07-16 14:55 ` [PATCH v12 08/11] x86/hvm: Add handler for save_one funcs Alexandru Isaila
@ 2018-07-16 14:55 ` Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 10/11] x86/hvm: Remove redundant save functions Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 11/11] x86/hvm: Remove save_one handler Alexandru Isaila
10 siblings, 0 replies; 17+ messages in thread
From: Alexandru Isaila @ 2018-07-16 14:55 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This patch moves the for loop up into the callers, so that state for a single
vcpu instance can now be saved through the save_one handlers and only that
vcpu needs to be paused rather than the whole domain.
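As a rough illustration of the single-instance path (simplified stand-in types
and helpers only, not the actual Xen code in the diff below): pause only the
targeted vcpu, invoke its save_one handler, then unpause it.

    #include <stdio.h>

    struct vcpu { int vcpu_id; int pause_count; };
    typedef struct { unsigned int cur; } ctx_t;
    typedef int (*save_one_handler_t)(struct vcpu *v, ctx_t *h);

    static void vcpu_pause(struct vcpu *v)   { v->pause_count++; }
    static void vcpu_unpause(struct vcpu *v) { v->pause_count--; }

    static int demo_save_one(struct vcpu *v, ctx_t *h)
    {
        printf("saved record for vcpu %d (pause_count=%d)\n",
               v->vcpu_id, v->pause_count);
        h->cur += 8;                   /* pretend an 8-byte record was written */
        return 0;
    }

    /* Sketch of the single-instance branch: only one vcpu is paused. */
    static int save_single_instance(struct vcpu *v, save_one_handler_t save_one,
                                    ctx_t *h)
    {
        int rv;

        vcpu_pause(v);                 /* not domain_pause() */
        rv = save_one(v, h);
        vcpu_unpause(v);

        return rv;
    }

    int main(void)
    {
        struct vcpu v = { 3, 0 };
        ctx_t h = { 0 };

        return save_single_instance(&v, demo_save_one, &h);
    }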
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V11:
- Changed the CONTINUE return to return 0.
---
xen/arch/x86/hvm/hvm.c | 18 +++----
xen/arch/x86/hvm/save.c | 137 +++++++++++++++++++++++++++++++++++++-----------
2 files changed, 115 insertions(+), 40 deletions(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c458dab..ac0b496 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -793,6 +793,13 @@ static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
struct segment_register seg;
struct hvm_hw_cpu ctxt;
+ /*
+ * We don't need to save state for a vcpu that is down; the restore
+ * code will leave it down if there is nothing saved.
+ */
+ if ( v->pause_flags & VPF_down )
+ return 0;
+
memset(&ctxt, 0, sizeof(ctxt));
/* Architecture-specific vmcs/vmcb bits */
@@ -899,13 +906,6 @@ static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
for_each_vcpu ( d, v )
{
- /*
- * We don't need to save state for a vcpu that is down; the restore
- * code will leave it down if there is nothing saved.
- */
- if ( v->pause_flags & VPF_down )
- continue;
-
err = hvm_save_cpu_ctxt_one(v, h);
if ( err )
break;
@@ -1198,7 +1198,7 @@ static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h
unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
int err = 0;
- if ( !cpu_has_xsave )
+ if ( !cpu_has_xsave || !xsave_enabled(v) )
return 0; /* do nothing */
err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
@@ -1223,8 +1223,6 @@ static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
for_each_vcpu ( d, v )
{
- if ( !xsave_enabled(v) )
- continue;
err = hvm_save_cpu_xsave_states_one(v, h);
if ( err )
break;
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index b674937..d57648d 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -138,9 +138,12 @@ size_t hvm_save_size(struct domain *d)
int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
XEN_GUEST_HANDLE_64(uint8) handle, uint64_t *bufsz)
{
- int rv;
+ int rv = 0;
hvm_domain_context_t ctxt = { };
const struct hvm_save_descriptor *desc;
+ bool is_single_instance = false;
+ uint32_t off = 0;
+ struct vcpu *v;
if ( d->is_dying ||
typecode > HVM_SAVE_CODE_MAX ||
@@ -148,43 +151,94 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
!hvm_sr_handlers[typecode].save )
return -EINVAL;
+ if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
+ instance < d->max_vcpus )
+ is_single_instance = true;
+
ctxt.size = hvm_sr_handlers[typecode].size;
- if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+ if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
+ instance == d->max_vcpus )
ctxt.size *= d->max_vcpus;
ctxt.data = xmalloc_bytes(ctxt.size);
if ( !ctxt.data )
return -ENOMEM;
- if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
- printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
- d->domain_id, typecode, rv);
- else if ( rv = -ENOENT, ctxt.cur >= sizeof(*desc) )
+ if ( is_single_instance )
+ vcpu_pause(d->vcpu[instance]);
+ else
+ domain_pause(d);
+
+ if ( is_single_instance )
{
- uint32_t off;
+ if ( hvm_sr_handlers[typecode].save_one != NULL )
+ rv = hvm_sr_handlers[typecode].save_one(d->vcpu[instance],
+ &ctxt);
+ else
+ rv = hvm_sr_handlers[typecode].save(d, &ctxt);
- for ( off = 0; off <= (ctxt.cur - sizeof(*desc)); off += desc->length )
+ if ( rv != 0 )
{
- desc = (void *)(ctxt.data + off);
- /* Move past header */
- off += sizeof(*desc);
- if ( ctxt.cur < desc->length ||
- off > ctxt.cur - desc->length )
- break;
- if ( instance == desc->instance )
- {
- rv = 0;
- if ( guest_handle_is_null(handle) )
- *bufsz = desc->length;
- else if ( *bufsz < desc->length )
- rv = -ENOBUFS;
- else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
- rv = -EFAULT;
- else
- *bufsz = desc->length;
- break;
- }
+ printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
+ d->domain_id, typecode, rv);
+ vcpu_unpause(d->vcpu[instance]);
+ }
+ else if ( ctxt.cur >= sizeof(*desc) )
+ {
+ rv = -ENOENT;
+ desc = (void *)(ctxt.data);
+ /* Move past header */
+ off = sizeof(*desc);
+ if ( ctxt.cur < desc->length ||
+ off > ctxt.cur - desc->length )
+ rv = -EFAULT;
+ rv = 0;
+ if ( guest_handle_is_null(handle) )
+ *bufsz = desc->length;
+ else if ( *bufsz < desc->length )
+ rv = -ENOBUFS;
+ else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
+ rv = -EFAULT;
+ else
+ *bufsz = desc->length;
+ vcpu_unpause(d->vcpu[instance]);
}
}
+ else
+ {
+ for_each_vcpu ( d, v )
+ {
+ if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
+ {
+ printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
+ d->domain_id, typecode, rv);
+ }
+ else if ( ctxt.cur >= sizeof(*desc) )
+ {
+ rv = -ENOENT;
+ desc = (void *)(ctxt.data + off);
+ /* Move past header */
+ off += sizeof(*desc);
+ if ( ctxt.cur < desc->length ||
+ off > ctxt.cur - desc->length )
+ break;
+ if ( instance == desc->instance )
+ {
+ rv = 0;
+ if ( guest_handle_is_null(handle) )
+ *bufsz = desc->length;
+ else if ( *bufsz < desc->length )
+ rv = -ENOBUFS;
+ else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
+ rv = -EFAULT;
+ else
+ *bufsz = desc->length;
+ break;
+ }
+ off += desc->length;
+ }
+ }
+ domain_unpause(d);
+ }
xfree(ctxt.data);
return rv;
@@ -196,7 +250,9 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
struct hvm_save_header hdr;
struct hvm_save_end end;
hvm_save_handler handler;
- unsigned int i;
+ hvm_save_one_handler save_one_handler;
+ unsigned int i, rc;
+ struct vcpu *v = NULL;
if ( d->is_dying )
return -EINVAL;
@@ -224,11 +280,32 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
{
handler = hvm_sr_handlers[i].save;
- if ( handler != NULL )
+ save_one_handler = hvm_sr_handlers[i].save_one;
+ if ( save_one_handler != NULL )
{
printk(XENLOG_G_INFO "HVM%d save: %s\n",
d->domain_id, hvm_sr_handlers[i].name);
- if ( handler(d, h) != 0 )
+ for_each_vcpu ( d, v )
+ {
+ rc = save_one_handler(v, h);
+
+ if( rc != 0 )
+ {
+ printk(XENLOG_G_ERR
+ "HVM%d save: failed to save type %"PRIu16"\n",
+ d->domain_id, i);
+ return -EFAULT;
+ }
+ }
+ }
+ else if ( handler != NULL )
+ {
+ printk(XENLOG_G_INFO "HVM%d save: %s\n",
+ d->domain_id, hvm_sr_handlers[i].name);
+
+ rc = handler(d, h);
+
+ if( rc != 0 )
{
printk(XENLOG_G_ERR
"HVM%d save: failed to save type %"PRIu16"\n",
--
2.7.4
* [PATCH v12 10/11] x86/hvm: Remove redundant save functions
2018-07-16 14:55 [PATCH v12 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (8 preceding siblings ...)
2018-07-16 14:55 ` [PATCH v12 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
@ 2018-07-16 14:55 ` Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 11/11] x86/hvm: Remove save_one handler Alexandru Isaila
10 siblings, 0 replies; 17+ messages in thread
From: Alexandru Isaila @ 2018-07-16 14:55 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This patch removes the now-redundant per-domain save functions and renames the
save_one* variants to save. It also changes the first parameter of the save
funcs from struct domain * to struct vcpu *.
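Roughly, the handler signature changes as sketched below (illustrative typedefs
only, not copied from the Xen headers; the real typedef names may differ):

    struct domain;
    struct vcpu;
    typedef struct hvm_domain_context hvm_domain_context_t;

    /* before: every save handler took the whole domain and looped over vcpus */
    typedef int (*save_handler_old_t)(struct domain *d, hvm_domain_context_t *h);

    /* after: save handlers take a single vcpu; per-domain state remains
     * reachable through v->domain inside the handler */
    typedef int (*save_handler_new_t)(struct vcpu *v, hvm_domain_context_t *h);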
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V11:
- Remove enum return type for save funcs.
---
xen/arch/x86/cpu/mcheck/vmce.c | 19 ++---------
xen/arch/x86/hvm/hpet.c | 3 +-
xen/arch/x86/hvm/hvm.c | 75 +++++-------------------------------------
xen/arch/x86/hvm/i8254.c | 3 +-
xen/arch/x86/hvm/irq.c | 9 +++--
xen/arch/x86/hvm/mtrr.c | 19 ++---------
xen/arch/x86/hvm/pmtimer.c | 3 +-
xen/arch/x86/hvm/rtc.c | 3 +-
xen/arch/x86/hvm/save.c | 26 +++------------
xen/arch/x86/hvm/vioapic.c | 3 +-
xen/arch/x86/hvm/viridian.c | 23 +++----------
xen/arch/x86/hvm/vlapic.c | 34 ++++++-------------
xen/arch/x86/hvm/vpic.c | 3 +-
xen/include/asm-x86/hvm/save.h | 2 +-
14 files changed, 50 insertions(+), 175 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 35044d7..b53ad7c 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,7 +349,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
return ret;
}
-static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int vmce_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
{
struct hvm_vmce_vcpu ctxt = {
.caps = v->arch.vmce.mcg_cap,
@@ -361,21 +361,6 @@ static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
}
-static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = vmce_save_vcpu_ctxt_one(v, h);
- if ( err )
- break;
- }
-
- return err;
-}
-
static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
{
unsigned int vcpuid = hvm_load_instance(h);
@@ -396,7 +381,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
- vmce_save_vcpu_ctxt_one,
+ NULL,
vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
/*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index aff8613..6e7f744 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -516,8 +516,9 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
};
-static int hpet_save(struct domain *d, hvm_domain_context_t *h)
+static int hpet_save(struct vcpu *vcpu, hvm_domain_context_t *h)
{
+ struct domain *d = vcpu->domain;
HPETState *hp = domain_vhpet(d);
struct vcpu *v = pt_global_vcpu_target(d);
int rc;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index ac0b496..d7bc877 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,7 +740,7 @@ void hvm_domain_destroy(struct domain *d)
destroy_vpci_mmcfg(d);
}
-static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
{
struct hvm_tsc_adjust ctxt = {};
@@ -749,21 +749,6 @@ static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
}
-static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = hvm_save_tsc_adjust_one(v, h);
- if ( err )
- break;
- }
-
- return err;
-}
-
static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
{
unsigned int vcpuid = hvm_load_instance(h);
@@ -785,10 +770,10 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
- hvm_save_tsc_adjust_one,
+ NULL,
hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
-static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
{
struct segment_register seg;
struct hvm_hw_cpu ctxt;
@@ -899,20 +884,6 @@ static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
}
-static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = hvm_save_cpu_ctxt_one(v, h);
- if ( err )
- break;
- }
- return err;
-}
-
/* Return a string indicating the error, or NULL for valid. */
const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
signed int cr0_pg)
@@ -1184,7 +1155,7 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, NULL,
hvm_load_cpu_ctxt,
1, HVMSR_PER_VCPU);
@@ -1192,7 +1163,7 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
save_area) + \
xstate_ctxt_size(xcr0))
-static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states(struct vcpu *v, hvm_domain_context_t *h)
{
struct hvm_hw_cpu_xsave *ctxt;
unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
@@ -1216,21 +1187,6 @@ static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h
return 0;
}
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = hvm_save_cpu_xsave_states_one(v, h);
- if ( err )
- break;
- }
-
- return err;
-}
-
/*
* Structure layout conformity checks, documenting correctness of the cast in
* the invocation of validate_xstate() below.
@@ -1368,7 +1324,7 @@ static const uint32_t msrs_to_send[] = {
};
static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
-static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs(struct vcpu *v, hvm_domain_context_t *h)
{
struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
struct hvm_msr *ctxt;
@@ -1429,21 +1385,6 @@ static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
return 0;
}
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = hvm_save_cpu_msrs_one(v, h);
- if ( err )
- break;
- }
-
- return err;
-}
-
static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
{
unsigned int i, vcpuid = hvm_load_instance(h);
@@ -1536,7 +1477,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
hvm_register_savevm(CPU_XSAVE_CODE,
"CPU_XSAVE",
hvm_save_cpu_xsave_states,
- hvm_save_cpu_xsave_states_one,
+ NULL,
hvm_load_cpu_xsave_states,
HVM_CPU_XSAVE_SIZE(xfeature_mask) +
sizeof(struct hvm_save_descriptor),
@@ -1549,7 +1490,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
hvm_register_savevm(CPU_MSR_CODE,
"CPU_MSR",
hvm_save_cpu_msrs,
- hvm_save_cpu_msrs_one,
+ NULL,
hvm_load_cpu_msrs,
HVM_CPU_MSR_SIZE(msr_count_max) +
sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index ec77b23..d51463d 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -390,8 +390,9 @@ void pit_stop_channel0_irq(PITState *pit)
spin_unlock(&pit->lock);
}
-static int pit_save(struct domain *d, hvm_domain_context_t *h)
+static int pit_save(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
PITState *pit = domain_vpit(d);
int rc;
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 770eab7..a405e7f 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -630,8 +630,9 @@ static int __init dump_irq_info_key_init(void)
}
__initcall(dump_irq_info_key_init);
-static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_pci(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_irq *hvm_irq = hvm_domain_irq(d);
unsigned int asserted, pdev, pintx;
int rc;
@@ -662,16 +663,18 @@ static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
return rc;
}
-static int irq_save_isa(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_isa(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_irq *hvm_irq = hvm_domain_irq(d);
/* Save ISA IRQ lines */
return ( hvm_save_entry(ISA_IRQ, 0, h, &hvm_irq->isa_irq) );
}
-static int irq_save_link(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_link(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_irq *hvm_irq = hvm_domain_irq(d);
/* Save PCI-ISA link state */
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 1cb2a2e..b7c9bd6 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -718,7 +718,7 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
return 0;
}
-static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr(struct vcpu *v, hvm_domain_context_t *h)
{
const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
struct hvm_hw_mtrr hw_mtrr = {
@@ -758,21 +758,6 @@ static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
}
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- /* save mtrr&pat */
- for_each_vcpu(d, v)
- {
- err = hvm_save_mtrr_msr_one(v, h);
- if ( err )
- break;
- }
- return err;
-}
-
static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
{
int vcpuid, i;
@@ -823,7 +808,7 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_save_mtrr_msr_one,
+HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, NULL,
hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
void memory_type_changed(struct domain *d)
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 0a5e8ce..461d2df 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -249,8 +249,9 @@ static int handle_pmt_io(
return X86EMUL_OKAY;
}
-static int acpi_save(struct domain *d, hvm_domain_context_t *h)
+static int acpi_save(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_hw_acpi *acpi = &d->arch.hvm_domain.acpi;
PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
uint32_t x, msb = acpi->tmr_val & TMR_VAL_MSB;
diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index ce7e71b..ea2fbd3 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -737,8 +737,9 @@ void rtc_migrate_timers(struct vcpu *v)
}
/* Save RTC hardware state */
-static int rtc_save(struct domain *d, hvm_domain_context_t *h)
+static int rtc_save(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
RTCState *s = domain_vrtc(d);
int rc;
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index d57648d..c579e49 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -174,7 +174,7 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
rv = hvm_sr_handlers[typecode].save_one(d->vcpu[instance],
&ctxt);
else
- rv = hvm_sr_handlers[typecode].save(d, &ctxt);
+ rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt);
if ( rv != 0 )
{
@@ -207,7 +207,8 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
{
for_each_vcpu ( d, v )
{
- if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
+ if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance],
+ &ctxt)) != 0 )
{
printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
d->domain_id, typecode, rv);
@@ -250,7 +251,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
struct hvm_save_header hdr;
struct hvm_save_end end;
hvm_save_handler handler;
- hvm_save_one_handler save_one_handler;
unsigned int i, rc;
struct vcpu *v = NULL;
@@ -280,14 +280,13 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
{
handler = hvm_sr_handlers[i].save;
- save_one_handler = hvm_sr_handlers[i].save_one;
- if ( save_one_handler != NULL )
+ if ( handler != NULL )
{
printk(XENLOG_G_INFO "HVM%d save: %s\n",
d->domain_id, hvm_sr_handlers[i].name);
for_each_vcpu ( d, v )
{
- rc = save_one_handler(v, h);
+ rc = handler(v, h);
if( rc != 0 )
{
@@ -298,21 +297,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
}
}
}
- else if ( handler != NULL )
- {
- printk(XENLOG_G_INFO "HVM%d save: %s\n",
- d->domain_id, hvm_sr_handlers[i].name);
-
- rc = handler(d, h);
-
- if( rc != 0 )
- {
- printk(XENLOG_G_ERR
- "HVM%d save: failed to save type %"PRIu16"\n",
- d->domain_id, i);
- return -EFAULT;
- }
- }
}
/* Save an end-of-file marker */
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 66f54e4..cec4b1b 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -569,8 +569,9 @@ int vioapic_get_trigger_mode(const struct domain *d, unsigned int gsi)
return vioapic->redirtbl[pin].fields.trig_mode;
}
-static int ioapic_save(struct domain *d, hvm_domain_context_t *h)
+static int ioapic_save(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_vioapic *s;
if ( !has_vioapic(d) )
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 1571336..1991829 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -990,8 +990,9 @@ out:
return HVM_HCALL_completed;
}
-static int viridian_save_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_domain_ctxt(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_viridian_domain_context ctxt = {
.time_ref_count = d->arch.hvm_domain.viridian.time_ref_count.val,
.hypercall_gpa = d->arch.hvm_domain.viridian.hypercall_gpa.raw,
@@ -1026,7 +1027,7 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt, NULL,
viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
-static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
{
struct hvm_viridian_vcpu_context ctxt = {};
@@ -1039,22 +1040,6 @@ static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
}
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu( d, v ) {
-
- err = viridian_save_vcpu_ctxt_one(v, h);
-
- if ( err )
- break;
- }
-
- return err;
-}
-
static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
{
int vcpuid;
@@ -1086,7 +1071,7 @@ static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_VCPU, viridian_save_vcpu_ctxt,
- viridian_save_vcpu_ctxt_one,
+ NULL,
viridian_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
static int __init parse_viridian_version(const char *arg)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index eff6070..e51dff7 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1435,45 +1435,31 @@ static void lapic_rearm(struct vlapic *s)
s->timer_last_update = s->pt.last_plt_gtime;
}
-static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
+static int lapic_save_hidden(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
+ struct domain *d = v->domain;
struct vlapic *s;
- int rc = 0;
if ( !has_vlapic(d) )
return 0;
- for_each_vcpu ( d, v )
- {
- s = vcpu_vlapic(v);
- if ( (rc = hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw)) != 0 )
- break;
- }
-
- return rc;
+ s = vcpu_vlapic(v);
+ return hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw);
}
-static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
+static int lapic_save_regs(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
+ struct domain *d = v->domain;
struct vlapic *s;
- int rc = 0;
if ( !has_vlapic(d) )
return 0;
- for_each_vcpu ( d, v )
- {
- if ( hvm_funcs.sync_pir_to_irr )
- hvm_funcs.sync_pir_to_irr(v);
-
- s = vcpu_vlapic(v);
- if ( (rc = hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs)) != 0 )
- break;
- }
+ if ( hvm_funcs.sync_pir_to_irr )
+ hvm_funcs.sync_pir_to_irr(v);
- return rc;
+ s = vcpu_vlapic(v);
+ return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs);
}
/*
diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index ca9b4cb..e03d8cf 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -371,8 +371,9 @@ static int vpic_intercept_elcr_io(
return X86EMUL_OKAY;
}
-static int vpic_save(struct domain *d, hvm_domain_context_t *h)
+static int vpic_save(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_hw_vpic *s;
int i;
diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-x86/hvm/save.h
index 2538628..e64b4df 100644
--- a/xen/include/asm-x86/hvm/save.h
+++ b/xen/include/asm-x86/hvm/save.h
@@ -95,7 +95,7 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
* The save handler may save multiple instances of a type into the buffer;
* the load handler will be called once for each instance found when
* restoring. Both return non-zero on error. */
-typedef int (*hvm_save_handler) (struct domain *d,
+typedef int (*hvm_save_handler) (struct vcpu *v,
hvm_domain_context_t *h);
typedef int (*hvm_save_one_handler)(struct vcpu *v,
hvm_domain_context_t *h);
--
2.7.4
* [PATCH v12 11/11] x86/hvm: Remove save_one handler
2018-07-16 14:55 [PATCH v12 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (9 preceding siblings ...)
2018-07-16 14:55 ` [PATCH v12 10/11] x86/hvm: Remove redundant save functions Alexandru Isaila
@ 2018-07-16 14:55 ` Alexandru Isaila
10 siblings, 0 replies; 17+ messages in thread
From: Alexandru Isaila @ 2018-07-16 14:55 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
xen/arch/x86/cpu/mcheck/vmce.c | 1 -
xen/arch/x86/hvm/hpet.c | 2 +-
xen/arch/x86/hvm/hvm.c | 5 +----
xen/arch/x86/hvm/i8254.c | 2 +-
xen/arch/x86/hvm/irq.c | 6 +++---
xen/arch/x86/hvm/mtrr.c | 2 +-
xen/arch/x86/hvm/pmtimer.c | 2 +-
xen/arch/x86/hvm/rtc.c | 2 +-
xen/arch/x86/hvm/save.c | 14 +++-----------
xen/arch/x86/hvm/vioapic.c | 2 +-
xen/arch/x86/hvm/viridian.c | 3 +--
xen/arch/x86/hvm/vlapic.c | 4 ++--
xen/arch/x86/hvm/vpic.c | 2 +-
xen/include/asm-x86/hvm/save.h | 6 +-----
14 files changed, 18 insertions(+), 35 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index b53ad7c..763d56b 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -381,7 +381,6 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
- NULL,
vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
/*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 6e7f744..3ed6547 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -641,7 +641,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
static void hpet_set(HPETState *h)
{
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d7bc877..37553d0 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -770,7 +770,6 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
- NULL,
hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
static int hvm_save_cpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
@@ -1155,7 +1154,7 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, NULL,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt,
hvm_load_cpu_ctxt,
1, HVMSR_PER_VCPU);
@@ -1477,7 +1476,6 @@ static int __init hvm_register_CPU_save_and_restore(void)
hvm_register_savevm(CPU_XSAVE_CODE,
"CPU_XSAVE",
hvm_save_cpu_xsave_states,
- NULL,
hvm_load_cpu_xsave_states,
HVM_CPU_XSAVE_SIZE(xfeature_mask) +
sizeof(struct hvm_save_descriptor),
@@ -1490,7 +1488,6 @@ static int __init hvm_register_CPU_save_and_restore(void)
hvm_register_savevm(CPU_MSR_CODE,
"CPU_MSR",
hvm_save_cpu_msrs,
- NULL,
hvm_load_cpu_msrs,
HVM_CPU_MSR_SIZE(msr_count_max) +
sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index d51463d..e0d2255 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -438,7 +438,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
void pit_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index a405e7f..b37275c 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -767,9 +767,9 @@ static int irq_load_link(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa,
1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index b7c9bd6..d9a4532 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -808,7 +808,7 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, NULL,
+HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr,
hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
void memory_type_changed(struct domain *d)
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 461d2df..d8dcbc2 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -310,7 +310,7 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, NULL, acpi_load,
+HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, acpi_load,
1, HVMSR_PER_DOM);
int pmtimer_change_ioport(struct domain *d, unsigned int version)
diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index ea2fbd3..58b70fc 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -784,7 +784,7 @@ static int rtc_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, NULL, rtc_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, rtc_load, 1, HVMSR_PER_DOM);
void rtc_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index c579e49..01e3cbe 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -85,18 +85,16 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
/* List of handlers for various HVM save and restore types */
static struct {
hvm_save_handler save;
- hvm_save_one_handler save_one;
hvm_load_handler load;
const char *name;
size_t size;
int kind;
-} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, NULL, "<?>"}, };
+} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, "<?>"}, };
/* Init-time function to add entries to that list */
void __init hvm_register_savevm(uint16_t typecode,
const char *name,
hvm_save_handler save_state,
- hvm_save_one_handler save_one,
hvm_load_handler load_state,
size_t size, int kind)
{
@@ -104,7 +102,6 @@ void __init hvm_register_savevm(uint16_t typecode,
ASSERT(hvm_sr_handlers[typecode].save == NULL);
ASSERT(hvm_sr_handlers[typecode].load == NULL);
hvm_sr_handlers[typecode].save = save_state;
- hvm_sr_handlers[typecode].save_one = save_one;
hvm_sr_handlers[typecode].load = load_state;
hvm_sr_handlers[typecode].name = name;
hvm_sr_handlers[typecode].size = size;
@@ -170,13 +167,8 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
if ( is_single_instance )
{
- if ( hvm_sr_handlers[typecode].save_one != NULL )
- rv = hvm_sr_handlers[typecode].save_one(d->vcpu[instance],
- &ctxt);
- else
- rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt);
-
- if ( rv != 0 )
+ if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance],
+ &ctxt)) != 0 )
{
printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
d->domain_id, typecode, rv);
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index cec4b1b..86d02cf 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -602,7 +602,7 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
return hvm_load_entry(IOAPIC, h, &s->domU);
}
-HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, NULL, ioapic_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, ioapic_load, 1, HVMSR_PER_DOM);
void vioapic_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 1991829..35616e5 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1024,7 +1024,7 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt, NULL,
+HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
static int viridian_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
@@ -1071,7 +1071,6 @@ static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_VCPU, viridian_save_vcpu_ctxt,
- NULL,
viridian_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
static int __init parse_viridian_version(const char *arg)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index e51dff7..6337cdb 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1562,9 +1562,9 @@ static int lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, NULL, lapic_load_hidden,
+HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_load_hidden,
1, HVMSR_PER_VCPU);
-HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, NULL, lapic_load_regs,
+HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_load_regs,
1, HVMSR_PER_VCPU);
int vlapic_init(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index e03d8cf..bad5066 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -412,7 +412,7 @@ static int vpic_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, NULL, vpic_load, 2, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, vpic_load, 2, HVMSR_PER_DOM);
void vpic_reset(struct domain *d)
{
diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-x86/hvm/save.h
index e64b4df..f7d31ad 100644
--- a/xen/include/asm-x86/hvm/save.h
+++ b/xen/include/asm-x86/hvm/save.h
@@ -97,8 +97,6 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
* restoring. Both return non-zero on error. */
typedef int (*hvm_save_handler) (struct vcpu *v,
hvm_domain_context_t *h);
-typedef int (*hvm_save_one_handler)(struct vcpu *v,
- hvm_domain_context_t *h);
typedef int (*hvm_load_handler) (struct domain *d,
hvm_domain_context_t *h);
@@ -107,7 +105,6 @@ typedef int (*hvm_load_handler) (struct domain *d,
void hvm_register_savevm(uint16_t typecode,
const char *name,
hvm_save_handler save_state,
- hvm_save_one_handler save_one,
hvm_load_handler load_state,
size_t size, int kind);
@@ -117,13 +114,12 @@ void hvm_register_savevm(uint16_t typecode,
/* Syntactic sugar around that function: specify the max number of
* saves, and this calculates the size of buffer needed */
-#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load, _num, _k) \
+#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k) \
static int __init __hvm_register_##_x##_save_and_restore(void) \
{ \
hvm_register_savevm(HVM_SAVE_CODE(_x), \
#_x, \
&_save, \
- _save_one, \
&_load, \
(_num) * (HVM_SAVE_LENGTH(_x) \
+ sizeof (struct hvm_save_descriptor)), \
--
2.7.4
* Re: [PATCH v12 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
2018-07-16 14:55 ` [PATCH v12 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
@ 2018-07-16 15:29 ` Paul Durrant
2018-07-17 12:25 ` Isaila Alexandru
0 siblings, 1 reply; 17+ messages in thread
From: Paul Durrant @ 2018-07-16 15:29 UTC (permalink / raw)
To: 'Alexandru Isaila', xen-devel@lists.xen.org
Cc: Ian Jackson, Wei Liu, jbeulich@suse.com, Andrew Cooper
> -----Original Message-----
> From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> Sent: 16 July 2018 15:55
> To: xen-devel@lists.xen.org
> Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul
> Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> <aisaila@bitdefender.com>
> Subject: [PATCH v12 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one
> func
>
> This is used to save data from a single instance.
>
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
>
> ---
> Changes since V11:
> - hvm_save_cpu_ctxt() now returns err from
> hvm_save_cpu_ctxt_one().
> ---
> xen/arch/x86/hvm/hvm.c | 216 ++++++++++++++++++++++++++------------
> -----------
> 1 file changed, 113 insertions(+), 103 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index dd88751..e20a25c 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -787,119 +787,129 @@ static int hvm_load_tsc_adjust(struct domain *d,
> hvm_domain_context_t *h)
> HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
> hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
>
> +static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t
> *h)
> +{
> + struct segment_register seg;
> + struct hvm_hw_cpu ctxt;
> +
> + memset(&ctxt, 0, sizeof(ctxt));
Why not use an = {} initializer instead of the memset here like elsewhere?
Paul
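(For illustration only: a minimal sketch of the two forms being discussed
here; this is not part of the patch under review.)

    /* Current form in the patch: declare, then zero explicitly. */
    struct hvm_hw_cpu ctxt;
    memset(&ctxt, 0, sizeof(ctxt));

    /* Suggested form: the empty initializer zero-fills the whole
     * structure at the point of definition. */
    struct hvm_hw_cpu ctxt = {};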
> +
> + /* Architecture-specific vmcs/vmcb bits */
> + hvm_funcs.save_cpu_ctxt(v, &ctxt);
> +
> + ctxt.tsc = hvm_get_guest_tsc_fixed(v, v->domain-
> >arch.hvm_domain.sync_tsc);
> +
> + ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
> +
> + hvm_get_segment_register(v, x86_seg_idtr, &seg);
> + ctxt.idtr_limit = seg.limit;
> + ctxt.idtr_base = seg.base;
> +
> + hvm_get_segment_register(v, x86_seg_gdtr, &seg);
> + ctxt.gdtr_limit = seg.limit;
> + ctxt.gdtr_base = seg.base;
> +
> + hvm_get_segment_register(v, x86_seg_cs, &seg);
> + ctxt.cs_sel = seg.sel;
> + ctxt.cs_limit = seg.limit;
> + ctxt.cs_base = seg.base;
> + ctxt.cs_arbytes = seg.attr;
> +
> + hvm_get_segment_register(v, x86_seg_ds, &seg);
> + ctxt.ds_sel = seg.sel;
> + ctxt.ds_limit = seg.limit;
> + ctxt.ds_base = seg.base;
> + ctxt.ds_arbytes = seg.attr;
> +
> + hvm_get_segment_register(v, x86_seg_es, &seg);
> + ctxt.es_sel = seg.sel;
> + ctxt.es_limit = seg.limit;
> + ctxt.es_base = seg.base;
> + ctxt.es_arbytes = seg.attr;
> +
> + hvm_get_segment_register(v, x86_seg_ss, &seg);
> + ctxt.ss_sel = seg.sel;
> + ctxt.ss_limit = seg.limit;
> + ctxt.ss_base = seg.base;
> + ctxt.ss_arbytes = seg.attr;
> +
> + hvm_get_segment_register(v, x86_seg_fs, &seg);
> + ctxt.fs_sel = seg.sel;
> + ctxt.fs_limit = seg.limit;
> + ctxt.fs_base = seg.base;
> + ctxt.fs_arbytes = seg.attr;
> +
> + hvm_get_segment_register(v, x86_seg_gs, &seg);
> + ctxt.gs_sel = seg.sel;
> + ctxt.gs_limit = seg.limit;
> + ctxt.gs_base = seg.base;
> + ctxt.gs_arbytes = seg.attr;
> +
> + hvm_get_segment_register(v, x86_seg_tr, &seg);
> + ctxt.tr_sel = seg.sel;
> + ctxt.tr_limit = seg.limit;
> + ctxt.tr_base = seg.base;
> + ctxt.tr_arbytes = seg.attr;
> +
> + hvm_get_segment_register(v, x86_seg_ldtr, &seg);
> + ctxt.ldtr_sel = seg.sel;
> + ctxt.ldtr_limit = seg.limit;
> + ctxt.ldtr_base = seg.base;
> + ctxt.ldtr_arbytes = seg.attr;
> +
> + if ( v->fpu_initialised )
> + {
> + memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
> + ctxt.flags = XEN_X86_FPU_INITIALISED;
> + }
> +
> + ctxt.rax = v->arch.user_regs.rax;
> + ctxt.rbx = v->arch.user_regs.rbx;
> + ctxt.rcx = v->arch.user_regs.rcx;
> + ctxt.rdx = v->arch.user_regs.rdx;
> + ctxt.rbp = v->arch.user_regs.rbp;
> + ctxt.rsi = v->arch.user_regs.rsi;
> + ctxt.rdi = v->arch.user_regs.rdi;
> + ctxt.rsp = v->arch.user_regs.rsp;
> + ctxt.rip = v->arch.user_regs.rip;
> + ctxt.rflags = v->arch.user_regs.rflags;
> + ctxt.r8 = v->arch.user_regs.r8;
> + ctxt.r9 = v->arch.user_regs.r9;
> + ctxt.r10 = v->arch.user_regs.r10;
> + ctxt.r11 = v->arch.user_regs.r11;
> + ctxt.r12 = v->arch.user_regs.r12;
> + ctxt.r13 = v->arch.user_regs.r13;
> + ctxt.r14 = v->arch.user_regs.r14;
> + ctxt.r15 = v->arch.user_regs.r15;
> + ctxt.dr0 = v->arch.debugreg[0];
> + ctxt.dr1 = v->arch.debugreg[1];
> + ctxt.dr2 = v->arch.debugreg[2];
> + ctxt.dr3 = v->arch.debugreg[3];
> + ctxt.dr6 = v->arch.debugreg[6];
> + ctxt.dr7 = v->arch.debugreg[7];
> +
> + return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
> +}
> +
> static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
> {
> struct vcpu *v;
> - struct hvm_hw_cpu ctxt;
> - struct segment_register seg;
> + int err = 0;
>
> for_each_vcpu ( d, v )
> {
> - /* We don't need to save state for a vcpu that is down; the restore
> - * code will leave it down if there is nothing saved. */
> + /*
> + * We don't need to save state for a vcpu that is down; the restore
> + * code will leave it down if there is nothing saved.
> + */
> if ( v->pause_flags & VPF_down )
> continue;
>
> - memset(&ctxt, 0, sizeof(ctxt));
> -
> - /* Architecture-specific vmcs/vmcb bits */
> - hvm_funcs.save_cpu_ctxt(v, &ctxt);
> -
> - ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm_domain.sync_tsc);
> -
> - ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
> -
> - hvm_get_segment_register(v, x86_seg_idtr, &seg);
> - ctxt.idtr_limit = seg.limit;
> - ctxt.idtr_base = seg.base;
> -
> - hvm_get_segment_register(v, x86_seg_gdtr, &seg);
> - ctxt.gdtr_limit = seg.limit;
> - ctxt.gdtr_base = seg.base;
> -
> - hvm_get_segment_register(v, x86_seg_cs, &seg);
> - ctxt.cs_sel = seg.sel;
> - ctxt.cs_limit = seg.limit;
> - ctxt.cs_base = seg.base;
> - ctxt.cs_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_ds, &seg);
> - ctxt.ds_sel = seg.sel;
> - ctxt.ds_limit = seg.limit;
> - ctxt.ds_base = seg.base;
> - ctxt.ds_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_es, &seg);
> - ctxt.es_sel = seg.sel;
> - ctxt.es_limit = seg.limit;
> - ctxt.es_base = seg.base;
> - ctxt.es_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_ss, &seg);
> - ctxt.ss_sel = seg.sel;
> - ctxt.ss_limit = seg.limit;
> - ctxt.ss_base = seg.base;
> - ctxt.ss_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_fs, &seg);
> - ctxt.fs_sel = seg.sel;
> - ctxt.fs_limit = seg.limit;
> - ctxt.fs_base = seg.base;
> - ctxt.fs_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_gs, &seg);
> - ctxt.gs_sel = seg.sel;
> - ctxt.gs_limit = seg.limit;
> - ctxt.gs_base = seg.base;
> - ctxt.gs_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_tr, &seg);
> - ctxt.tr_sel = seg.sel;
> - ctxt.tr_limit = seg.limit;
> - ctxt.tr_base = seg.base;
> - ctxt.tr_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_ldtr, &seg);
> - ctxt.ldtr_sel = seg.sel;
> - ctxt.ldtr_limit = seg.limit;
> - ctxt.ldtr_base = seg.base;
> - ctxt.ldtr_arbytes = seg.attr;
> -
> - if ( v->fpu_initialised )
> - {
> - memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
> - ctxt.flags = XEN_X86_FPU_INITIALISED;
> - }
> -
> - ctxt.rax = v->arch.user_regs.rax;
> - ctxt.rbx = v->arch.user_regs.rbx;
> - ctxt.rcx = v->arch.user_regs.rcx;
> - ctxt.rdx = v->arch.user_regs.rdx;
> - ctxt.rbp = v->arch.user_regs.rbp;
> - ctxt.rsi = v->arch.user_regs.rsi;
> - ctxt.rdi = v->arch.user_regs.rdi;
> - ctxt.rsp = v->arch.user_regs.rsp;
> - ctxt.rip = v->arch.user_regs.rip;
> - ctxt.rflags = v->arch.user_regs.rflags;
> - ctxt.r8 = v->arch.user_regs.r8;
> - ctxt.r9 = v->arch.user_regs.r9;
> - ctxt.r10 = v->arch.user_regs.r10;
> - ctxt.r11 = v->arch.user_regs.r11;
> - ctxt.r12 = v->arch.user_regs.r12;
> - ctxt.r13 = v->arch.user_regs.r13;
> - ctxt.r14 = v->arch.user_regs.r14;
> - ctxt.r15 = v->arch.user_regs.r15;
> - ctxt.dr0 = v->arch.debugreg[0];
> - ctxt.dr1 = v->arch.debugreg[1];
> - ctxt.dr2 = v->arch.debugreg[2];
> - ctxt.dr3 = v->arch.debugreg[3];
> - ctxt.dr6 = v->arch.debugreg[6];
> - ctxt.dr7 = v->arch.debugreg[7];
> -
> - if ( hvm_save_entry(CPU, v->vcpu_id, h, &ctxt) != 0 )
> - return 1;
> + err = hvm_save_cpu_ctxt_one(v, h);
> + if ( err )
> + break;
> }
> - return 0;
> + return err;
> }
>
> /* Return a string indicating the error, or NULL for valid. */
> --
> 2.7.4
* Re: [PATCH v12 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func
2018-07-16 14:55 ` [PATCH v12 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-07-16 15:34 ` Paul Durrant
0 siblings, 0 replies; 17+ messages in thread
From: Paul Durrant @ 2018-07-16 15:34 UTC (permalink / raw)
To: 'Alexandru Isaila', xen-devel@lists.xen.org
Cc: Ian Jackson, Wei Liu, jbeulich@suse.com, Andrew Cooper
> -----Original Message-----
> From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> Sent: 16 July 2018 15:56
> To: xen-devel@lists.xen.org
> Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul
> Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> <aisaila@bitdefender.com>
> Subject: [PATCH v12 07/11] x86/hvm: Introduce
> viridian_save_vcpu_ctxt_one() func
>
> This is used to save data from a single instance.
>
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> ---
> xen/arch/x86/hvm/viridian.c | 29 +++++++++++++++++++----------
> 1 file changed, 19 insertions(+), 10 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
> index 694eae6..f3430bb 100644
> --- a/xen/arch/x86/hvm/viridian.c
> +++ b/xen/arch/x86/hvm/viridian.c
> @@ -1026,24 +1026,33 @@ static int viridian_load_domain_ctxt(struct
> domain *d, hvm_domain_context_t *h)
> HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN,
> viridian_save_domain_ctxt,
> viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
>
> -static int viridian_save_vcpu_ctxt(struct domain *d,
> hvm_domain_context_t *h)
> +static int viridian_save_vcpu_ctxt_one(struct vcpu *v,
> hvm_domain_context_t *h)
> {
> - struct vcpu *v;
> + struct hvm_viridian_vcpu_context ctxt = {};
>
> - if ( !is_viridian_domain(d) )
> + if ( !is_viridian_domain(v->domain) )
> return 0;
>
> + ctxt.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw;
> + ctxt.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending;
> +
> + return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
> +}
> +
> +static int viridian_save_vcpu_ctxt(struct domain *d,
> hvm_domain_context_t *h)
> +{
> + struct vcpu *v;
> + int err = 0;
> +
> for_each_vcpu( d, v ) {
Sorry for not noticing earlier but, whilst you're not actually changing this line, it needs a style fix. So if you re-spin this then it would be good to correct it.
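(Presumably the form used elsewhere in the file, i.e.

    for_each_vcpu ( d, v )
    {

is what is wanted here.)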
> - struct hvm_viridian_vcpu_context ctxt = {
> - .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
> - .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
> - };
>
> - if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
> - return 1;
> + err = viridian_save_vcpu_ctxt_one(v, h);
> +
I think you can lose the blank line here, but that's a small nit so...
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
> + if ( err )
> + break;
> }
>
> - return 0;
> + return err;
> }
>
> static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t
> *h)
> --
> 2.7.4
* Re: [PATCH v12 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
2018-07-16 15:29 ` Paul Durrant
@ 2018-07-17 12:25 ` Isaila Alexandru
2018-07-17 14:03 ` Jan Beulich
0 siblings, 1 reply; 17+ messages in thread
From: Isaila Alexandru @ 2018-07-17 12:25 UTC (permalink / raw)
To: Paul Durrant, xen-devel@lists.xen.org
Cc: Ian Jackson, Wei Liu, jbeulich@suse.com, Andrew Cooper
On Mon, 2018-07-16 at 15:29 +0000, Paul Durrant wrote:
> >
> > -----Original Message-----
> > From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> > Sent: 16 July 2018 15:55
> > To: xen-devel@lists.xen.org
> > Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.
> > com>;
> > jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul
> > Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> > <aisaila@bitdefender.com>
> > Subject: [PATCH v12 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one
> > func
> >
> > This is used to save data from a single instance.
> >
> > Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> >
> > ---
> > Changes since V11:
> > - hvm_save_cpu_ctxt() now returns err from
> > hvm_save_cpu_ctxt_one().
> > ---
> > xen/arch/x86/hvm/hvm.c | 216 ++++++++++++++++++++++++++-----------
> > -
> > -----------
> > 1 file changed, 113 insertions(+), 103 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index dd88751..e20a25c 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -787,119 +787,129 @@ static int hvm_load_tsc_adjust(struct
> > domain *d,
> > hvm_domain_context_t *h)
> > HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
> > hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
> >
> > +static int hvm_save_cpu_ctxt_one(struct vcpu *v,
> > hvm_domain_context_t
> > *h)
> > +{
> > + struct segment_register seg;
> > + struct hvm_hw_cpu ctxt;
> > +
> > + memset(&ctxt, 0, sizeof(ctxt));
> Why not use an = {} initializer instead of the memset here like
> elsewhere?
>
> Paul
I wanted to make as few changes as possible, so I only added an
initializer where there was none.
Alex
> >
> > +
> > + /* Architecture-specific vmcs/vmcb bits */
> > + hvm_funcs.save_cpu_ctxt(v, &ctxt);
> > +
> > + ctxt.tsc = hvm_get_guest_tsc_fixed(v, v->domain-
> > >
> > > arch.hvm_domain.sync_tsc);
> > +
> > + ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
> > +
> > + hvm_get_segment_register(v, x86_seg_idtr, &seg);
> > + ctxt.idtr_limit = seg.limit;
> > + ctxt.idtr_base = seg.base;
> > +
> > + hvm_get_segment_register(v, x86_seg_gdtr, &seg);
> > + ctxt.gdtr_limit = seg.limit;
> > + ctxt.gdtr_base = seg.base;
> > +
> > + hvm_get_segment_register(v, x86_seg_cs, &seg);
> > + ctxt.cs_sel = seg.sel;
> > + ctxt.cs_limit = seg.limit;
> > + ctxt.cs_base = seg.base;
> > + ctxt.cs_arbytes = seg.attr;
> > +
> > + hvm_get_segment_register(v, x86_seg_ds, &seg);
> > + ctxt.ds_sel = seg.sel;
> > + ctxt.ds_limit = seg.limit;
> > + ctxt.ds_base = seg.base;
> > + ctxt.ds_arbytes = seg.attr;
> > +
> > + hvm_get_segment_register(v, x86_seg_es, &seg);
> > + ctxt.es_sel = seg.sel;
> > + ctxt.es_limit = seg.limit;
> > + ctxt.es_base = seg.base;
> > + ctxt.es_arbytes = seg.attr;
> > +
> > + hvm_get_segment_register(v, x86_seg_ss, &seg);
> > + ctxt.ss_sel = seg.sel;
> > + ctxt.ss_limit = seg.limit;
> > + ctxt.ss_base = seg.base;
> > + ctxt.ss_arbytes = seg.attr;
> > +
> > + hvm_get_segment_register(v, x86_seg_fs, &seg);
> > + ctxt.fs_sel = seg.sel;
> > + ctxt.fs_limit = seg.limit;
> > + ctxt.fs_base = seg.base;
> > + ctxt.fs_arbytes = seg.attr;
> > +
> > + hvm_get_segment_register(v, x86_seg_gs, &seg);
> > + ctxt.gs_sel = seg.sel;
> > + ctxt.gs_limit = seg.limit;
> > + ctxt.gs_base = seg.base;
> > + ctxt.gs_arbytes = seg.attr;
> > +
> > + hvm_get_segment_register(v, x86_seg_tr, &seg);
> > + ctxt.tr_sel = seg.sel;
> > + ctxt.tr_limit = seg.limit;
> > + ctxt.tr_base = seg.base;
> > + ctxt.tr_arbytes = seg.attr;
> > +
> > + hvm_get_segment_register(v, x86_seg_ldtr, &seg);
> > + ctxt.ldtr_sel = seg.sel;
> > + ctxt.ldtr_limit = seg.limit;
> > + ctxt.ldtr_base = seg.base;
> > + ctxt.ldtr_arbytes = seg.attr;
> > +
> > + if ( v->fpu_initialised )
> > + {
> > + memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt,
> > sizeof(ctxt.fpu_regs));
> > + ctxt.flags = XEN_X86_FPU_INITIALISED;
> > + }
> > +
> > + ctxt.rax = v->arch.user_regs.rax;
> > + ctxt.rbx = v->arch.user_regs.rbx;
> > + ctxt.rcx = v->arch.user_regs.rcx;
> > + ctxt.rdx = v->arch.user_regs.rdx;
> > + ctxt.rbp = v->arch.user_regs.rbp;
> > + ctxt.rsi = v->arch.user_regs.rsi;
> > + ctxt.rdi = v->arch.user_regs.rdi;
> > + ctxt.rsp = v->arch.user_regs.rsp;
> > + ctxt.rip = v->arch.user_regs.rip;
> > + ctxt.rflags = v->arch.user_regs.rflags;
> > + ctxt.r8 = v->arch.user_regs.r8;
> > + ctxt.r9 = v->arch.user_regs.r9;
> > + ctxt.r10 = v->arch.user_regs.r10;
> > + ctxt.r11 = v->arch.user_regs.r11;
> > + ctxt.r12 = v->arch.user_regs.r12;
> > + ctxt.r13 = v->arch.user_regs.r13;
> > + ctxt.r14 = v->arch.user_regs.r14;
> > + ctxt.r15 = v->arch.user_regs.r15;
> > + ctxt.dr0 = v->arch.debugreg[0];
> > + ctxt.dr1 = v->arch.debugreg[1];
> > + ctxt.dr2 = v->arch.debugreg[2];
> > + ctxt.dr3 = v->arch.debugreg[3];
> > + ctxt.dr6 = v->arch.debugreg[6];
> > + ctxt.dr7 = v->arch.debugreg[7];
> > +
> > + return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
> > +}
> > +
> > static int hvm_save_cpu_ctxt(struct domain *d,
> > hvm_domain_context_t *h)
> > {
> > struct vcpu *v;
> > - struct hvm_hw_cpu ctxt;
> > - struct segment_register seg;
> > + int err = 0;
> >
> > for_each_vcpu ( d, v )
> > {
> > - /* We don't need to save state for a vcpu that is down;
> > the restore
> > - * code will leave it down if there is nothing saved. */
> > + /*
> > + * We don't need to save state for a vcpu that is down;
> > the restore
> > + * code will leave it down if there is nothing saved.
> > + */
> > if ( v->pause_flags & VPF_down )
> > continue;
> >
> > - memset(&ctxt, 0, sizeof(ctxt));
> > -
> > - /* Architecture-specific vmcs/vmcb bits */
> > - hvm_funcs.save_cpu_ctxt(v, &ctxt);
> > -
> > - ctxt.tsc = hvm_get_guest_tsc_fixed(v, d-
> > >arch.hvm_domain.sync_tsc);
> > -
> > - ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
> > -
> > - hvm_get_segment_register(v, x86_seg_idtr, &seg);
> > - ctxt.idtr_limit = seg.limit;
> > - ctxt.idtr_base = seg.base;
> > -
> > - hvm_get_segment_register(v, x86_seg_gdtr, &seg);
> > - ctxt.gdtr_limit = seg.limit;
> > - ctxt.gdtr_base = seg.base;
> > -
> > - hvm_get_segment_register(v, x86_seg_cs, &seg);
> > - ctxt.cs_sel = seg.sel;
> > - ctxt.cs_limit = seg.limit;
> > - ctxt.cs_base = seg.base;
> > - ctxt.cs_arbytes = seg.attr;
> > -
> > - hvm_get_segment_register(v, x86_seg_ds, &seg);
> > - ctxt.ds_sel = seg.sel;
> > - ctxt.ds_limit = seg.limit;
> > - ctxt.ds_base = seg.base;
> > - ctxt.ds_arbytes = seg.attr;
> > -
> > - hvm_get_segment_register(v, x86_seg_es, &seg);
> > - ctxt.es_sel = seg.sel;
> > - ctxt.es_limit = seg.limit;
> > - ctxt.es_base = seg.base;
> > - ctxt.es_arbytes = seg.attr;
> > -
> > - hvm_get_segment_register(v, x86_seg_ss, &seg);
> > - ctxt.ss_sel = seg.sel;
> > - ctxt.ss_limit = seg.limit;
> > - ctxt.ss_base = seg.base;
> > - ctxt.ss_arbytes = seg.attr;
> > -
> > - hvm_get_segment_register(v, x86_seg_fs, &seg);
> > - ctxt.fs_sel = seg.sel;
> > - ctxt.fs_limit = seg.limit;
> > - ctxt.fs_base = seg.base;
> > - ctxt.fs_arbytes = seg.attr;
> > -
> > - hvm_get_segment_register(v, x86_seg_gs, &seg);
> > - ctxt.gs_sel = seg.sel;
> > - ctxt.gs_limit = seg.limit;
> > - ctxt.gs_base = seg.base;
> > - ctxt.gs_arbytes = seg.attr;
> > -
> > - hvm_get_segment_register(v, x86_seg_tr, &seg);
> > - ctxt.tr_sel = seg.sel;
> > - ctxt.tr_limit = seg.limit;
> > - ctxt.tr_base = seg.base;
> > - ctxt.tr_arbytes = seg.attr;
> > -
> > - hvm_get_segment_register(v, x86_seg_ldtr, &seg);
> > - ctxt.ldtr_sel = seg.sel;
> > - ctxt.ldtr_limit = seg.limit;
> > - ctxt.ldtr_base = seg.base;
> > - ctxt.ldtr_arbytes = seg.attr;
> > -
> > - if ( v->fpu_initialised )
> > - {
> > - memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt,
> > sizeof(ctxt.fpu_regs));
> > - ctxt.flags = XEN_X86_FPU_INITIALISED;
> > - }
> > -
> > - ctxt.rax = v->arch.user_regs.rax;
> > - ctxt.rbx = v->arch.user_regs.rbx;
> > - ctxt.rcx = v->arch.user_regs.rcx;
> > - ctxt.rdx = v->arch.user_regs.rdx;
> > - ctxt.rbp = v->arch.user_regs.rbp;
> > - ctxt.rsi = v->arch.user_regs.rsi;
> > - ctxt.rdi = v->arch.user_regs.rdi;
> > - ctxt.rsp = v->arch.user_regs.rsp;
> > - ctxt.rip = v->arch.user_regs.rip;
> > - ctxt.rflags = v->arch.user_regs.rflags;
> > - ctxt.r8 = v->arch.user_regs.r8;
> > - ctxt.r9 = v->arch.user_regs.r9;
> > - ctxt.r10 = v->arch.user_regs.r10;
> > - ctxt.r11 = v->arch.user_regs.r11;
> > - ctxt.r12 = v->arch.user_regs.r12;
> > - ctxt.r13 = v->arch.user_regs.r13;
> > - ctxt.r14 = v->arch.user_regs.r14;
> > - ctxt.r15 = v->arch.user_regs.r15;
> > - ctxt.dr0 = v->arch.debugreg[0];
> > - ctxt.dr1 = v->arch.debugreg[1];
> > - ctxt.dr2 = v->arch.debugreg[2];
> > - ctxt.dr3 = v->arch.debugreg[3];
> > - ctxt.dr6 = v->arch.debugreg[6];
> > - ctxt.dr7 = v->arch.debugreg[7];
> > -
> > - if ( hvm_save_entry(CPU, v->vcpu_id, h, &ctxt) != 0 )
> > - return 1;
> > + err = hvm_save_cpu_ctxt_one(v, h);
> > + if ( err )
> > + break;
> > }
> > - return 0;
> > + return err;
> > }
> >
> > /* Return a string indicating the error, or NULL for valid. */
> > --
> > 2.7.4
* Re: [PATCH v12 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
2018-07-17 12:25 ` Isaila Alexandru
@ 2018-07-17 14:03 ` Jan Beulich
2018-07-17 14:57 ` Isaila Alexandru
0 siblings, 1 reply; 17+ messages in thread
From: Jan Beulich @ 2018-07-17 14:03 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, xen-devel, Paul Durrant, Wei Liu, Ian Jackson
>>> On 17.07.18 at 14:25, <aisaila@bitdefender.com> wrote:
> On Mon, 2018-07-16 at 15:29 +0000, Paul Durrant wrote:
>> > From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
>> > Sent: 16 July 2018 15:55
>> > --- a/xen/arch/x86/hvm/hvm.c
>> > +++ b/xen/arch/x86/hvm/hvm.c
>> > @@ -787,119 +787,129 @@ static int hvm_load_tsc_adjust(struct
>> > domain *d,
>> > hvm_domain_context_t *h)
>> > HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
>> > hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
>> >
>> > +static int hvm_save_cpu_ctxt_one(struct vcpu *v,
>> > hvm_domain_context_t
>> > *h)
>> > +{
>> > + struct segment_register seg;
>> > + struct hvm_hw_cpu ctxt;
>> > +
>> > + memset(&ctxt, 0, sizeof(ctxt));
>> Why not use an = {} initializer instead of the memset here like
>> elsewhere?
>
> I wanted to make as few changes as possible, so I only added an
> initializer where there was none.
Trying to limit patch impact is certainly appreciated, but please take a
look at your patch to see whether this would really have made much
of a difference.
Jan
* Re: [PATCH v12 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
2018-07-17 14:03 ` Jan Beulich
@ 2018-07-17 14:57 ` Isaila Alexandru
0 siblings, 0 replies; 17+ messages in thread
From: Isaila Alexandru @ 2018-07-17 14:57 UTC (permalink / raw)
To: Jan Beulich; +Cc: Andrew Cooper, xen-devel, Paul Durrant, Wei Liu, Ian Jackson
On Tue, 2018-07-17 at 08:03 -0600, Jan Beulich wrote:
> >
> > >
> > > >
> > > > On 17.07.18 at 14:25, <aisaila@bitdefender.com> wrote:
> > On Mon, 2018-07-16 at 15:29 +0000, Paul Durrant wrote:
> > >
> > > >
> > > > From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> > > > Sent: 16 July 2018 15:55
> > > > --- a/xen/arch/x86/hvm/hvm.c
> > > > +++ b/xen/arch/x86/hvm/hvm.c
> > > > @@ -787,119 +787,129 @@ static int hvm_load_tsc_adjust(struct
> > > > domain *d,
> > > > hvm_domain_context_t *h)
> > > > HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
> > > > hvm_load_tsc_adjust, 1,
> > > > HVMSR_PER_VCPU);
> > > >
> > > > +static int hvm_save_cpu_ctxt_one(struct vcpu *v,
> > > > hvm_domain_context_t
> > > > *h)
> > > > +{
> > > > + struct segment_register seg;
> > > > + struct hvm_hw_cpu ctxt;
> > > > +
> > > > + memset(&ctxt, 0, sizeof(ctxt));
> > > Why not use an = {} initializer instead of the memset here like
> > > elsewhere?
> > I wanted to make as few changes as possible, so I only added an
> > initializer where there was none.
> Trying to limit patch impact is certainly appreciated, but please
> take a
> look at your patch to see whether this would really have made much
> of a difference.
>
I will change this in the next version but I will wait for more
comments on the rest of the patches.
Regards,
Alex
Thread overview: 17+ messages
2018-07-16 14:55 [PATCH v12 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 02/11] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
2018-07-16 15:29 ` Paul Durrant
2018-07-17 12:25 ` Isaila Alexandru
2018-07-17 14:03 ` Jan Beulich
2018-07-17 14:57 ` Isaila Alexandru
2018-07-16 14:55 ` [PATCH v12 04/11] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
2018-07-16 15:34 ` Paul Durrant
2018-07-16 14:55 ` [PATCH v12 08/11] x86/hvm: Add handler for save_one funcs Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 10/11] x86/hvm: Remove redundant save functions Alexandru Isaila
2018-07-16 14:55 ` [PATCH v12 11/11] x86/hvm: Remove save_one handler Alexandru Isaila