* [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance
@ 2018-08-03 13:53 Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 01/14] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
` (14 more replies)
0 siblings, 15 replies; 26+ messages in thread
From: Alexandru Isaila @ 2018-08-03 13:53 UTC (permalink / raw)
To: xen-devel; +Cc: wei.liu2, paul.durrant, ian.jackson, jbeulich, andrew.cooper3
Hi all,
This patch series addresses the idea of saving data from a single vcpu instance.
It starts by adding *_save_one() functions, then introduces a handler for the
new save_one functions and makes use of it in hvm_save() and hvm_save_one().
The final patches clean up the now-redundant save functions and change
hvm_save_one() to use vcpu_pause() instead of domain_pause(). A minimal sketch
of the resulting per-vcpu save pattern follows the patch list below.
Cheers,
Alexandru Isaila (14):
x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
x86/hvm: Introduce hvm_save_tsc_adjust_one() func
x86/hvm: Introduce hvm_save_cpu_ctxt_one func
x86/hvm: Introduce hvm_save_cpu_xsave_states_one
x86/hvm: Introduce hvm_save_cpu_msrs_one func
x86/hvm: Introduce hvm_save_mtrr_msr_one func
x86/hvm: Introduce viridian_save_vcpu_ctxt_one()
x86/hvm: Introduce lapic_save_hidden_one
x86/hvm: Introduce lapic_save_regs_one func
x86/hvm: Add handler for save_one funcs
x86/domctl: Use hvm_save_vcpu_handler
x86/hvm: Drop the use of save functions
x86/hvm: Remove redundant save functions
x86/domctl: Don't pause the whole domain if only getting vcpu state
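
As a rough illustration of the pattern each patch in the series applies
(the EXAMPLE/example_* names are hypothetical; the hvm_save_entry(),
for_each_vcpu() and handler signatures mirror the real code), every
per-domain save handler is split into a per-vcpu save_one function plus
a thin loop over the domain's vcpus:

    /* New per-vcpu handler type introduced by patch 10. */
    typedef int (*hvm_save_vcpu_handler)(struct vcpu *v,
                                         hvm_domain_context_t *h);

    /* Saves the state of exactly one vcpu instance. */
    static int example_save_one(struct vcpu *v, hvm_domain_context_t *h)
    {
        struct hvm_example ctxt = {
            .field = v->arch.example_field,  /* hypothetical per-vcpu state */
        };

        return hvm_save_entry(EXAMPLE, v->vcpu_id, h, &ctxt);
    }

    /*
     * The per-domain wrapper becomes a simple loop, and is dropped once
     * hvm_save() itself iterates over the vcpus (patches 12-13).
     */
    static int example_save(struct domain *d, hvm_domain_context_t *h)
    {
        struct vcpu *v;
        int err = 0;

        for_each_vcpu ( d, v )
        {
            err = example_save_one(v, h);
            if ( err )
                break;
        }

        return err;
    }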
* [PATCH v15 01/14] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
2018-08-03 13:53 [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
@ 2018-08-03 13:53 ` Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 02/14] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
` (13 subsequent siblings)
14 siblings, 0 replies; 26+ messages in thread
From: Alexandru Isaila @ 2018-08-03 13:53 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V11:
- Removed the memset and added init with {}.
---
xen/arch/x86/cpu/mcheck/vmce.c | 21 +++++++++++++--------
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index e07cd2f..31e553c 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,6 +349,18 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
return ret;
}
+static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+ struct hvm_vmce_vcpu ctxt = {
+ .caps = v->arch.vmce.mcg_cap,
+ .mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
+ .mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
+ .mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
+ };
+
+ return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+}
+
static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
{
struct vcpu *v;
@@ -356,14 +368,7 @@ static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
for_each_vcpu ( d, v )
{
- struct hvm_vmce_vcpu ctxt = {
- .caps = v->arch.vmce.mcg_cap,
- .mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
- .mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
- .mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
- };
-
- err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+ err = vmce_save_vcpu_ctxt_one(v, h);
if ( err )
break;
}
--
2.7.4
* [PATCH v15 02/14] x86/hvm: Introduce hvm_save_tsc_adjust_one() func
2018-08-03 13:53 [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 01/14] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-08-03 13:53 ` Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 03/14] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
` (12 subsequent siblings)
14 siblings, 0 replies; 26+ messages in thread
From: Alexandru Isaila @ 2018-08-03 13:53 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V13:
- Moved tsc_adjust to the initializer.
---
xen/arch/x86/hvm/hvm.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 93092d2..d90da9a 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,16 +740,23 @@ void hvm_domain_destroy(struct domain *d)
destroy_vpci_mmcfg(d);
}
+static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+ struct hvm_tsc_adjust ctxt = {
+ .tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
+ };
+
+ return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+}
+
static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
{
struct vcpu *v;
- struct hvm_tsc_adjust ctxt;
int err = 0;
for_each_vcpu ( d, v )
{
- ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
- err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+ err = hvm_save_tsc_adjust_one(v, h);
if ( err )
break;
}
--
2.7.4
* [PATCH v15 03/14] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
2018-08-03 13:53 [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 01/14] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 02/14] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
@ 2018-08-03 13:53 ` Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 04/14] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
` (11 subsequent siblings)
14 siblings, 0 replies; 26+ messages in thread
From: Alexandru Isaila @ 2018-08-03 13:53 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V14:
- Move all free fields to the initializer
- Add blank line to before the return
- Move v->pause_flags check to the save_one function.
---
xen/arch/x86/hvm/hvm.c | 219 +++++++++++++++++++++++++------------------------
1 file changed, 113 insertions(+), 106 deletions(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d90da9a..333c342 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -787,119 +787,126 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
-static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
- struct hvm_hw_cpu ctxt;
struct segment_register seg;
+ struct hvm_hw_cpu ctxt = {
+ .tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc),
+ .msr_tsc_aux = hvm_msr_tsc_aux(v),
+ .rax = v->arch.user_regs.rax,
+ .rbx = v->arch.user_regs.rbx,
+ .rcx = v->arch.user_regs.rcx,
+ .rdx = v->arch.user_regs.rdx,
+ .rbp = v->arch.user_regs.rbp,
+ .rsi = v->arch.user_regs.rsi,
+ .rdi = v->arch.user_regs.rdi,
+ .rsp = v->arch.user_regs.rsp,
+ .rip = v->arch.user_regs.rip,
+ .rflags = v->arch.user_regs.rflags,
+ .r8 = v->arch.user_regs.r8,
+ .r9 = v->arch.user_regs.r9,
+ .r10 = v->arch.user_regs.r10,
+ .r11 = v->arch.user_regs.r11,
+ .r12 = v->arch.user_regs.r12,
+ .r13 = v->arch.user_regs.r13,
+ .r14 = v->arch.user_regs.r14,
+ .r15 = v->arch.user_regs.r15,
+ .dr0 = v->arch.debugreg[0],
+ .dr1 = v->arch.debugreg[1],
+ .dr2 = v->arch.debugreg[2],
+ .dr3 = v->arch.debugreg[3],
+ .dr6 = v->arch.debugreg[6],
+ .dr7 = v->arch.debugreg[7],
+ };
- for_each_vcpu ( d, v )
+ /*
+ * We don't need to save state for a vcpu that is down; the restore
+ * code will leave it down if there is nothing saved.
+ */
+ if ( v->pause_flags & VPF_down )
+ return 0;
+
+ /* Architecture-specific vmcs/vmcb bits */
+ hvm_funcs.save_cpu_ctxt(v, &ctxt);
+
+ hvm_get_segment_register(v, x86_seg_idtr, &seg);
+ ctxt.idtr_limit = seg.limit;
+ ctxt.idtr_base = seg.base;
+
+ hvm_get_segment_register(v, x86_seg_gdtr, &seg);
+ ctxt.gdtr_limit = seg.limit;
+ ctxt.gdtr_base = seg.base;
+
+ hvm_get_segment_register(v, x86_seg_cs, &seg);
+ ctxt.cs_sel = seg.sel;
+ ctxt.cs_limit = seg.limit;
+ ctxt.cs_base = seg.base;
+ ctxt.cs_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_ds, &seg);
+ ctxt.ds_sel = seg.sel;
+ ctxt.ds_limit = seg.limit;
+ ctxt.ds_base = seg.base;
+ ctxt.ds_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_es, &seg);
+ ctxt.es_sel = seg.sel;
+ ctxt.es_limit = seg.limit;
+ ctxt.es_base = seg.base;
+ ctxt.es_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_ss, &seg);
+ ctxt.ss_sel = seg.sel;
+ ctxt.ss_limit = seg.limit;
+ ctxt.ss_base = seg.base;
+ ctxt.ss_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_fs, &seg);
+ ctxt.fs_sel = seg.sel;
+ ctxt.fs_limit = seg.limit;
+ ctxt.fs_base = seg.base;
+ ctxt.fs_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_gs, &seg);
+ ctxt.gs_sel = seg.sel;
+ ctxt.gs_limit = seg.limit;
+ ctxt.gs_base = seg.base;
+ ctxt.gs_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_tr, &seg);
+ ctxt.tr_sel = seg.sel;
+ ctxt.tr_limit = seg.limit;
+ ctxt.tr_base = seg.base;
+ ctxt.tr_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_ldtr, &seg);
+ ctxt.ldtr_sel = seg.sel;
+ ctxt.ldtr_limit = seg.limit;
+ ctxt.ldtr_base = seg.base;
+ ctxt.ldtr_arbytes = seg.attr;
+
+ if ( v->fpu_initialised )
{
- /* We don't need to save state for a vcpu that is down; the restore
- * code will leave it down if there is nothing saved. */
- if ( v->pause_flags & VPF_down )
- continue;
+ memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
+ ctxt.flags = XEN_X86_FPU_INITIALISED;
+ }
- memset(&ctxt, 0, sizeof(ctxt));
-
- /* Architecture-specific vmcs/vmcb bits */
- hvm_funcs.save_cpu_ctxt(v, &ctxt);
-
- ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm_domain.sync_tsc);
-
- ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
-
- hvm_get_segment_register(v, x86_seg_idtr, &seg);
- ctxt.idtr_limit = seg.limit;
- ctxt.idtr_base = seg.base;
-
- hvm_get_segment_register(v, x86_seg_gdtr, &seg);
- ctxt.gdtr_limit = seg.limit;
- ctxt.gdtr_base = seg.base;
-
- hvm_get_segment_register(v, x86_seg_cs, &seg);
- ctxt.cs_sel = seg.sel;
- ctxt.cs_limit = seg.limit;
- ctxt.cs_base = seg.base;
- ctxt.cs_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_ds, &seg);
- ctxt.ds_sel = seg.sel;
- ctxt.ds_limit = seg.limit;
- ctxt.ds_base = seg.base;
- ctxt.ds_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_es, &seg);
- ctxt.es_sel = seg.sel;
- ctxt.es_limit = seg.limit;
- ctxt.es_base = seg.base;
- ctxt.es_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_ss, &seg);
- ctxt.ss_sel = seg.sel;
- ctxt.ss_limit = seg.limit;
- ctxt.ss_base = seg.base;
- ctxt.ss_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_fs, &seg);
- ctxt.fs_sel = seg.sel;
- ctxt.fs_limit = seg.limit;
- ctxt.fs_base = seg.base;
- ctxt.fs_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_gs, &seg);
- ctxt.gs_sel = seg.sel;
- ctxt.gs_limit = seg.limit;
- ctxt.gs_base = seg.base;
- ctxt.gs_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_tr, &seg);
- ctxt.tr_sel = seg.sel;
- ctxt.tr_limit = seg.limit;
- ctxt.tr_base = seg.base;
- ctxt.tr_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_ldtr, &seg);
- ctxt.ldtr_sel = seg.sel;
- ctxt.ldtr_limit = seg.limit;
- ctxt.ldtr_base = seg.base;
- ctxt.ldtr_arbytes = seg.attr;
-
- if ( v->fpu_initialised )
- {
- memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
- ctxt.flags = XEN_X86_FPU_INITIALISED;
- }
+ return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
+}
+
+static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+ struct vcpu *v;
+ int err = 0;
- ctxt.rax = v->arch.user_regs.rax;
- ctxt.rbx = v->arch.user_regs.rbx;
- ctxt.rcx = v->arch.user_regs.rcx;
- ctxt.rdx = v->arch.user_regs.rdx;
- ctxt.rbp = v->arch.user_regs.rbp;
- ctxt.rsi = v->arch.user_regs.rsi;
- ctxt.rdi = v->arch.user_regs.rdi;
- ctxt.rsp = v->arch.user_regs.rsp;
- ctxt.rip = v->arch.user_regs.rip;
- ctxt.rflags = v->arch.user_regs.rflags;
- ctxt.r8 = v->arch.user_regs.r8;
- ctxt.r9 = v->arch.user_regs.r9;
- ctxt.r10 = v->arch.user_regs.r10;
- ctxt.r11 = v->arch.user_regs.r11;
- ctxt.r12 = v->arch.user_regs.r12;
- ctxt.r13 = v->arch.user_regs.r13;
- ctxt.r14 = v->arch.user_regs.r14;
- ctxt.r15 = v->arch.user_regs.r15;
- ctxt.dr0 = v->arch.debugreg[0];
- ctxt.dr1 = v->arch.debugreg[1];
- ctxt.dr2 = v->arch.debugreg[2];
- ctxt.dr3 = v->arch.debugreg[3];
- ctxt.dr6 = v->arch.debugreg[6];
- ctxt.dr7 = v->arch.debugreg[7];
-
- if ( hvm_save_entry(CPU, v->vcpu_id, h, &ctxt) != 0 )
- return 1;
+ for_each_vcpu ( d, v )
+ {
+ err = hvm_save_cpu_ctxt_one(v, h);
+ if ( err )
+ break;
}
- return 0;
+
+ return err;
}
/* Return a string indicating the error, or NULL for valid. */
--
2.7.4
* [PATCH v15 04/14] x86/hvm: Introduce hvm_save_cpu_xsave_states_one
2018-08-03 13:53 [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (2 preceding siblings ...)
2018-08-03 13:53 ` [PATCH v15 03/14] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
@ 2018-08-03 13:53 ` Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 05/14] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
` (10 subsequent siblings)
14 siblings, 0 replies; 26+ messages in thread
From: Alexandru Isaila @ 2018-08-03 13:53 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V14:
- Remove err init
- Add blank line ahead of return
- Move xsave_enabled() check to the save_one func.
---
xen/arch/x86/hvm/hvm.c | 47 +++++++++++++++++++++++++++++------------------
1 file changed, 29 insertions(+), 18 deletions(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 333c342..5b0820e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1187,35 +1187,46 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
save_area) + \
xstate_ctxt_size(xcr0))
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
struct hvm_hw_cpu_xsave *ctxt;
+ unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+ int err;
- if ( !cpu_has_xsave )
+ if ( !cpu_has_xsave || !xsave_enabled(v) )
return 0; /* do nothing */
- for_each_vcpu ( d, v )
- {
- unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+ err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
+ if ( err )
+ return err;
- if ( !xsave_enabled(v) )
- continue;
- if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
- return 1;
- ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
- h->cur += size;
+ ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
+ h->cur += size;
+ ctxt->xfeature_mask = xfeature_mask;
+ ctxt->xcr0 = v->arch.xcr0;
+ ctxt->xcr0_accum = v->arch.xcr0_accum;
- ctxt->xfeature_mask = xfeature_mask;
- ctxt->xcr0 = v->arch.xcr0;
- ctxt->xcr0_accum = v->arch.xcr0_accum;
- expand_xsave_states(v, &ctxt->save_area,
- size - offsetof(typeof(*ctxt), save_area));
- }
+ expand_xsave_states(v, &ctxt->save_area,
+ size - offsetof(typeof(*ctxt), save_area));
return 0;
}
+static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+{
+ struct vcpu *v;
+ int err = 0;
+
+ for_each_vcpu ( d, v )
+ {
+ err = hvm_save_cpu_xsave_states_one(v, h);
+ if ( err )
+ break;
+ }
+
+ return err;
+}
+
/*
* Structure layout conformity checks, documenting correctness of the cast in
* the invocation of validate_xstate() below.
--
2.7.4
* [PATCH v15 05/14] x86/hvm: Introduce hvm_save_cpu_msrs_one func
2018-08-03 13:53 [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (3 preceding siblings ...)
2018-08-03 13:53 ` [PATCH v15 04/14] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
@ 2018-08-03 13:53 ` Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 06/14] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
` (9 subsequent siblings)
14 siblings, 0 replies; 26+ messages in thread
From: Alexandru Isaila @ 2018-08-03 13:53 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
---
Changes since V14:
- Remove err init
- Add blank line ahead of return.
---
xen/arch/x86/hvm/hvm.c | 106 +++++++++++++++++++++++++++----------------------
1 file changed, 59 insertions(+), 47 deletions(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5b0820e..7df8744 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1364,69 +1364,81 @@ static const uint32_t msrs_to_send[] = {
};
static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
+ struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
+ struct hvm_msr *ctxt;
+ unsigned int i;
+ int err;
- for_each_vcpu ( d, v )
+ err = _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+ HVM_CPU_MSR_SIZE(msr_count_max));
+ if ( err )
+ return err;
+ ctxt = (struct hvm_msr *)&h->data[h->cur];
+ ctxt->count = 0;
+
+ for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
{
- struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
- struct hvm_msr *ctxt;
- unsigned int i;
+ uint64_t val;
+ int rc = guest_rdmsr(v, msrs_to_send[i], &val);
- if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
- HVM_CPU_MSR_SIZE(msr_count_max)) )
- return 1;
- ctxt = (struct hvm_msr *)&h->data[h->cur];
- ctxt->count = 0;
+ /*
+ * It is the programmers responsibility to ensure that
+ * msrs_to_send[] contain generally-read/write MSRs.
+ * X86EMUL_EXCEPTION here implies a missing feature, and that the
+ * guest doesn't have access to the MSR.
+ */
+ if ( rc == X86EMUL_EXCEPTION )
+ continue;
- for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
+ if ( rc != X86EMUL_OKAY )
{
- uint64_t val;
- int rc = guest_rdmsr(v, msrs_to_send[i], &val);
+ ASSERT_UNREACHABLE();
+ return -ENXIO;
+ }
- /*
- * It is the programmers responsibility to ensure that
- * msrs_to_send[] contain generally-read/write MSRs.
- * X86EMUL_EXCEPTION here implies a missing feature, and that the
- * guest doesn't have access to the MSR.
- */
- if ( rc == X86EMUL_EXCEPTION )
- continue;
+ if ( !val )
+ continue; /* Skip empty MSRs. */
- if ( rc != X86EMUL_OKAY )
- {
- ASSERT_UNREACHABLE();
- return -ENXIO;
- }
+ ctxt->msr[ctxt->count].index = msrs_to_send[i];
+ ctxt->msr[ctxt->count++].val = val;
+ }
- if ( !val )
- continue; /* Skip empty MSRs. */
+ if ( hvm_funcs.save_msr )
+ hvm_funcs.save_msr(v, ctxt);
- ctxt->msr[ctxt->count].index = msrs_to_send[i];
- ctxt->msr[ctxt->count++].val = val;
- }
+ ASSERT(ctxt->count <= msr_count_max);
- if ( hvm_funcs.save_msr )
- hvm_funcs.save_msr(v, ctxt);
+ for ( i = 0; i < ctxt->count; ++i )
+ ctxt->msr[i]._rsvd = 0;
- ASSERT(ctxt->count <= msr_count_max);
+ if ( ctxt->count )
+ {
+ /* Rewrite length to indicate how much space we actually used. */
+ desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
+ h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+ }
+ else
+ /* or rewind and remove the descriptor from the stream. */
+ h->cur -= sizeof(struct hvm_save_descriptor);
- for ( i = 0; i < ctxt->count; ++i )
- ctxt->msr[i]._rsvd = 0;
+ return 0;
+}
- if ( ctxt->count )
- {
- /* Rewrite length to indicate how much space we actually used. */
- desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
- h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
- }
- else
- /* or rewind and remove the descriptor from the stream. */
- h->cur -= sizeof(struct hvm_save_descriptor);
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+ struct vcpu *v;
+ int err = 0;
+
+ for_each_vcpu ( d, v )
+ {
+ err = hvm_save_cpu_msrs_one(v, h);
+ if ( err )
+ break;
}
- return 0;
+ return err;
}
static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
--
2.7.4
* [PATCH v15 06/14] x86/hvm: Introduce hvm_save_mtrr_msr_one func
2018-08-03 13:53 [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (4 preceding siblings ...)
2018-08-03 13:53 ` [PATCH v15 05/14] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
@ 2018-08-03 13:53 ` Alexandru Isaila
2018-08-07 12:00 ` Jan Beulich
2018-08-03 13:53 ` [PATCH v15 07/14] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
` (8 subsequent siblings)
14 siblings, 1 reply; 26+ messages in thread
From: Alexandru Isaila @ 2018-08-03 13:53 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since v14:
- Fix style violations
- Use structure fields over cast
- Use memcpy for fixed_ranges.
Note: This patch is based on Roger Pau Monne's series[1]
---
xen/arch/x86/hvm/mtrr.c | 77 +++++++++++++++++++++++++------------------------
1 file changed, 40 insertions(+), 37 deletions(-)
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 48facbb..2d5af72 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -718,52 +718,55 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
return 0;
}
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
+ const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
+ struct hvm_hw_mtrr hw_mtrr = {
+ .msr_mtrr_def_type = mtrr_state->def_type |
+ MASK_INSR(mtrr_state->fixed_enabled,
+ MTRRdefType_FE) |
+ MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
+ .msr_mtrr_cap = mtrr_state->mtrr_cap,
+ };
+ unsigned int i;
- /* save mtrr&pat */
- for_each_vcpu(d, v)
+ if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
+ (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
{
- const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
- struct hvm_hw_mtrr hw_mtrr = {
- .msr_mtrr_def_type = mtrr_state->def_type |
- MASK_INSR(mtrr_state->fixed_enabled,
- MTRRdefType_FE) |
- MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
- .msr_mtrr_cap = mtrr_state->mtrr_cap,
- };
- unsigned int i;
+ dprintk(XENLOG_G_ERR,
+ "HVM save: %pv: too many (%lu) variable range MTRRs\n",
+ v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
+ return -EINVAL;
+ }
- if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
- (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
- {
- dprintk(XENLOG_G_ERR,
- "HVM save: %pv: too many (%lu) variable range MTRRs\n",
- v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
- return -EINVAL;
- }
+ hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+
+ for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
+ {
+ /* save physbase */
+ hw_mtrr.msr_mtrr_var[i * 2] = mtrr_state->var_ranges->base;
+ /* save physmask */
+ hw_mtrr.msr_mtrr_var[i * 2 + 1] = mtrr_state->var_ranges->mask;
+ }
- hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+ memcpy(hw_mtrr.msr_mtrr_fixed, mtrr_state->fixed_ranges, NUM_FIXED_MSR);
- for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
- {
- /* save physbase */
- hw_mtrr.msr_mtrr_var[i*2] =
- ((uint64_t*)mtrr_state->var_ranges)[i*2];
- /* save physmask */
- hw_mtrr.msr_mtrr_var[i*2+1] =
- ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
- }
+ return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
+}
- for ( i = 0; i < NUM_FIXED_MSR; i++ )
- hw_mtrr.msr_mtrr_fixed[i] =
- ((uint64_t*)mtrr_state->fixed_ranges)[i];
+static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+{
+ struct vcpu *v;
+ int err = 0;
- if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
- return 1;
+ /* save mtrr&pat */
+ for_each_vcpu(d, v)
+ {
+ err = hvm_save_mtrr_msr_one(v, h);
+ if ( err )
+ break;
}
- return 0;
+ return err;
}
static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
--
2.7.4
* [PATCH v15 07/14] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func
2018-08-03 13:53 [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (5 preceding siblings ...)
2018-08-03 13:53 ` [PATCH v15 06/14] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
@ 2018-08-03 13:53 ` Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 08/14] x86/hvm: Introduce lapic_save_hidden_one Alexandru Isaila
` (7 subsequent siblings)
14 siblings, 0 replies; 26+ messages in thread
From: Alexandru Isaila @ 2018-08-03 13:53 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
---
Changes since V14:
- Moved all the operations in the initializer.
---
xen/arch/x86/hvm/viridian.c | 30 +++++++++++++++++++-----------
1 file changed, 19 insertions(+), 11 deletions(-)
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 694eae6..3f52d38 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1026,24 +1026,32 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
+ struct hvm_viridian_vcpu_context ctxt = {
+ .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
+ .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
+ };
- if ( !is_viridian_domain(d) )
+ if ( !is_viridian_domain(v->domain) )
return 0;
- for_each_vcpu( d, v ) {
- struct hvm_viridian_vcpu_context ctxt = {
- .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
- .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
- };
+ return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
+}
+
+static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+ struct vcpu *v;
+ int err = 0;
- if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
- return 1;
+ for_each_vcpu ( d, v )
+ {
+ err = viridian_save_vcpu_ctxt_one(v, h);
+ if ( err )
+ break;
}
- return 0;
+ return err;
}
static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
--
2.7.4
* [PATCH v15 08/14] x86/hvm: Introduce lapic_save_hidden_one
2018-08-03 13:53 [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (6 preceding siblings ...)
2018-08-03 13:53 ` [PATCH v15 07/14] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-08-03 13:53 ` Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 09/14] x86/hvm: Introduce lapic_save_regs_one func Alexandru Isaila
` (6 subsequent siblings)
14 siblings, 0 replies; 26+ messages in thread
From: Alexandru Isaila @ 2018-08-03 13:53 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
xen/arch/x86/hvm/vlapic.c | 22 ++++++++++++++--------
1 file changed, 14 insertions(+), 8 deletions(-)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 1b9f00a..0795161 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1435,23 +1435,29 @@ static void lapic_rearm(struct vlapic *s)
s->timer_last_update = s->pt.last_plt_gtime;
}
-static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
+static int lapic_save_hidden_one(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
- struct vlapic *s;
- int rc = 0;
+ struct vlapic *s = vcpu_vlapic(v);
- if ( !has_vlapic(d) )
+ if ( !has_vlapic(v->domain) )
return 0;
+ return hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw);
+}
+
+static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
+{
+ struct vcpu *v;
+ int err = 0;
+
for_each_vcpu ( d, v )
{
- s = vcpu_vlapic(v);
- if ( (rc = hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw)) != 0 )
+ err = lapic_save_hidden_one(v, h);
+ if ( err )
break;
}
- return rc;
+ return err;
}
static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
--
2.7.4
* [PATCH v15 09/14] x86/hvm: Introduce lapic_save_regs_one func
2018-08-03 13:53 [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (7 preceding siblings ...)
2018-08-03 13:53 ` [PATCH v15 08/14] x86/hvm: Introduce lapic_save_hidden_one Alexandru Isaila
@ 2018-08-03 13:53 ` Alexandru Isaila
2018-08-07 12:09 ` Jan Beulich
2018-08-03 13:53 ` [PATCH v15 10/14] x86/hvm: Add handler for save_one funcs Alexandru Isaila
` (5 subsequent siblings)
14 siblings, 1 reply; 26+ messages in thread
From: Alexandru Isaila @ 2018-08-03 13:53 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
xen/arch/x86/hvm/vlapic.c | 27 +++++++++++++++++++--------
1 file changed, 19 insertions(+), 8 deletions(-)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 0795161..d35810e 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1460,26 +1460,37 @@ static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
return err;
}
+static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+ struct vlapic *s;
+
+ if ( !has_vlapic(v->domain) )
+ return 0;
+
+ if ( hvm_funcs.sync_pir_to_irr )
+ hvm_funcs.sync_pir_to_irr(v);
+
+ s = vcpu_vlapic(v);
+
+ return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs);
+}
+
static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
{
struct vcpu *v;
- struct vlapic *s;
- int rc = 0;
+ int err = 0;
if ( !has_vlapic(d) )
return 0;
for_each_vcpu ( d, v )
{
- if ( hvm_funcs.sync_pir_to_irr )
- hvm_funcs.sync_pir_to_irr(v);
-
- s = vcpu_vlapic(v);
- if ( (rc = hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs)) != 0 )
+ err = lapic_save_regs_one(v, h);
+ if ( err )
break;
}
- return rc;
+ return err;
}
/*
--
2.7.4
* [PATCH v15 10/14] x86/hvm: Add handler for save_one funcs
2018-08-03 13:53 [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (8 preceding siblings ...)
2018-08-03 13:53 ` [PATCH v15 09/14] x86/hvm: Introduce lapic_save_regs_one func Alexandru Isaila
@ 2018-08-03 13:53 ` Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 11/14] x86/domctl: Use hvm_save_vcpu_handler Alexandru Isaila
` (4 subsequent siblings)
14 siblings, 0 replies; 26+ messages in thread
From: Alexandru Isaila @ 2018-08-03 13:53 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V14:
- Change handler name from hvm_save_one_handler to
hvm_save_vcpu_handler.
---
xen/arch/x86/cpu/mcheck/vmce.c | 1 +
xen/arch/x86/hvm/hpet.c | 2 +-
xen/arch/x86/hvm/hvm.c | 6 +++++-
xen/arch/x86/hvm/i8254.c | 2 +-
xen/arch/x86/hvm/irq.c | 6 +++---
xen/arch/x86/hvm/mtrr.c | 4 ++--
xen/arch/x86/hvm/pmtimer.c | 2 +-
xen/arch/x86/hvm/rtc.c | 2 +-
xen/arch/x86/hvm/save.c | 3 +++
xen/arch/x86/hvm/vioapic.c | 2 +-
xen/arch/x86/hvm/viridian.c | 3 ++-
xen/arch/x86/hvm/vlapic.c | 4 ++--
xen/arch/x86/hvm/vpic.c | 2 +-
xen/include/asm-x86/hvm/save.h | 6 +++++-
14 files changed, 29 insertions(+), 16 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 31e553c..35044d7 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -396,6 +396,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
+ vmce_save_vcpu_ctxt_one,
vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
/*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 2837709..aff8613 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -640,7 +640,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
static void hpet_set(HPETState *h)
{
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7df8744..4a70251 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -785,6 +785,7 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
+ hvm_save_tsc_adjust_one,
hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1180,7 +1181,8 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+ hvm_load_cpu_ctxt,
1, HVMSR_PER_VCPU);
#define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
@@ -1533,6 +1535,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
hvm_register_savevm(CPU_XSAVE_CODE,
"CPU_XSAVE",
hvm_save_cpu_xsave_states,
+ hvm_save_cpu_xsave_states_one,
hvm_load_cpu_xsave_states,
HVM_CPU_XSAVE_SIZE(xfeature_mask) +
sizeof(struct hvm_save_descriptor),
@@ -1545,6 +1548,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
hvm_register_savevm(CPU_MSR_CODE,
"CPU_MSR",
hvm_save_cpu_msrs,
+ hvm_save_cpu_msrs_one,
hvm_load_cpu_msrs,
HVM_CPU_MSR_SIZE(msr_count_max) +
sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index 992f08d..ec77b23 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -437,7 +437,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
void pit_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index c85d004..770eab7 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -764,9 +764,9 @@ static int irq_load_link(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa,
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 2d5af72..d4aa026 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -819,8 +819,8 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_load_mtrr_msr,
- 1, HVMSR_PER_VCPU);
+HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_save_mtrr_msr_one,
+ hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
void memory_type_changed(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 435647f..0a5e8ce 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -309,7 +309,7 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, acpi_load,
+HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, NULL, acpi_load,
1, HVMSR_PER_DOM);
int pmtimer_change_ioport(struct domain *d, unsigned int version)
diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index cb75b99..ce7e71b 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -783,7 +783,7 @@ static int rtc_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, rtc_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, NULL, rtc_load, 1, HVMSR_PER_DOM);
void rtc_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 422b96c..1106b96 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -85,6 +85,7 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
/* List of handlers for various HVM save and restore types */
static struct {
hvm_save_handler save;
+ hvm_save_vcpu_handler save_one;
hvm_load_handler load;
const char *name;
size_t size;
@@ -95,6 +96,7 @@ static struct {
void __init hvm_register_savevm(uint16_t typecode,
const char *name,
hvm_save_handler save_state,
+ hvm_save_vcpu_handler save_one,
hvm_load_handler load_state,
size_t size, int kind)
{
@@ -102,6 +104,7 @@ void __init hvm_register_savevm(uint16_t typecode,
ASSERT(hvm_sr_handlers[typecode].save == NULL);
ASSERT(hvm_sr_handlers[typecode].load == NULL);
hvm_sr_handlers[typecode].save = save_state;
+ hvm_sr_handlers[typecode].save_one = save_one;
hvm_sr_handlers[typecode].load = load_state;
hvm_sr_handlers[typecode].name = name;
hvm_sr_handlers[typecode].size = size;
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 97b419f..66f54e4 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -601,7 +601,7 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
return hvm_load_entry(IOAPIC, h, &s->domU);
}
-HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, ioapic_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, NULL, ioapic_load, 1, HVMSR_PER_DOM);
void vioapic_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 3f52d38..268ccce 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1023,7 +1023,7 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
+HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt, NULL,
viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1085,6 +1085,7 @@ static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_VCPU, viridian_save_vcpu_ctxt,
+ viridian_save_vcpu_ctxt_one,
viridian_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
static int __init parse_viridian_version(const char *arg)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index d35810e..669075e 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1593,9 +1593,9 @@ static int lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_load_hidden,
+HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_save_hidden_one, lapic_load_hidden,
1, HVMSR_PER_VCPU);
-HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_load_regs,
+HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_save_regs_one, lapic_load_regs,
1, HVMSR_PER_VCPU);
int vlapic_init(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index e160bbd..ca9b4cb 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -411,7 +411,7 @@ static int vpic_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, vpic_load, 2, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, NULL, vpic_load, 2, HVMSR_PER_DOM);
void vpic_reset(struct domain *d)
{
diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-x86/hvm/save.h
index f889e8f..f2283fc 100644
--- a/xen/include/asm-x86/hvm/save.h
+++ b/xen/include/asm-x86/hvm/save.h
@@ -97,6 +97,8 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
* restoring. Both return non-zero on error. */
typedef int (*hvm_save_handler) (struct domain *d,
hvm_domain_context_t *h);
+typedef int (*hvm_save_vcpu_handler)(struct vcpu *v,
+ hvm_domain_context_t *h);
typedef int (*hvm_load_handler) (struct domain *d,
hvm_domain_context_t *h);
@@ -105,6 +107,7 @@ typedef int (*hvm_load_handler) (struct domain *d,
void hvm_register_savevm(uint16_t typecode,
const char *name,
hvm_save_handler save_state,
+ hvm_save_vcpu_handler save_one,
hvm_load_handler load_state,
size_t size, int kind);
@@ -114,12 +117,13 @@ void hvm_register_savevm(uint16_t typecode,
/* Syntactic sugar around that function: specify the max number of
* saves, and this calculates the size of buffer needed */
-#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k) \
+#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load, _num, _k) \
static int __init __hvm_register_##_x##_save_and_restore(void) \
{ \
hvm_register_savevm(HVM_SAVE_CODE(_x), \
#_x, \
&_save, \
+ _save_one, \
&_load, \
(_num) * (HVM_SAVE_LENGTH(_x) \
+ sizeof (struct hvm_save_descriptor)), \
--
2.7.4
* [PATCH v15 11/14] x86/domctl: Use hvm_save_vcpu_handler
2018-08-03 13:53 [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (9 preceding siblings ...)
2018-08-03 13:53 ` [PATCH v15 10/14] x86/hvm: Add handler for save_one funcs Alexandru Isaila
@ 2018-08-03 13:53 ` Alexandru Isaila
2018-08-07 12:25 ` Jan Beulich
2018-08-03 13:53 ` [PATCH v15 12/14] x86/hvm: Drop the use of save functions Alexandru Isaila
` (3 subsequent siblings)
14 siblings, 1 reply; 26+ messages in thread
From: Alexandru Isaila @ 2018-08-03 13:53 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This patch aims to use the new save_one functions in hvm_save().
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V14:
- Removed the modification from the hvm_save_one
- Removed vcpu init
- Declared rc as int
- Add vcpu id to the log print.
---
xen/arch/x86/hvm/save.c | 28 ++++++++++++++++++++++++++--
1 file changed, 26 insertions(+), 2 deletions(-)
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 1106b96..61565fe 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -196,7 +196,10 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
struct hvm_save_header hdr;
struct hvm_save_end end;
hvm_save_handler handler;
+ hvm_save_vcpu_handler save_one_handler;
unsigned int i;
+ int rc;
+ struct vcpu *v;
if ( d->is_dying )
return -EINVAL;
@@ -224,11 +227,32 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
{
handler = hvm_sr_handlers[i].save;
- if ( handler != NULL )
+ save_one_handler = hvm_sr_handlers[i].save_one;
+ if ( save_one_handler != NULL )
+ {
+ for_each_vcpu ( d, v )
+ {
+ printk(XENLOG_G_INFO "HVM %pv save: %s\n",
+ v, hvm_sr_handlers[i].name);
+ rc = save_one_handler(v, h);
+
+ if( rc != 0 )
+ {
+ printk(XENLOG_G_ERR
+ "HVM %pv save: failed to save type %"PRIu16"\n",
+ v, i);
+ return -EFAULT;
+ }
+ }
+ }
+ else if ( handler != NULL )
{
printk(XENLOG_G_INFO "HVM%d save: %s\n",
d->domain_id, hvm_sr_handlers[i].name);
- if ( handler(d, h) != 0 )
+
+ rc = handler(d, h);
+
+ if( rc != 0 )
{
printk(XENLOG_G_ERR
"HVM%d save: failed to save type %"PRIu16"\n",
--
2.7.4
* [PATCH v15 12/14] x86/hvm: Drop the use of save functions
2018-08-03 13:53 [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (10 preceding siblings ...)
2018-08-03 13:53 ` [PATCH v15 11/14] x86/domctl: Use hvm_save_vcpu_handler Alexandru Isaila
@ 2018-08-03 13:53 ` Alexandru Isaila
2018-08-07 12:28 ` Jan Beulich
2018-08-07 12:41 ` Jan Beulich
2018-08-03 13:53 ` [PATCH v15 13/14] x86/hvm: Remove redundant " Alexandru Isaila
` (2 subsequent siblings)
14 siblings, 2 replies; 26+ messages in thread
From: Alexandru Isaila @ 2018-08-03 13:53 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This patch drops the use of save functions in hvm_save.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
xen/arch/x86/hvm/save.c | 25 ++++---------------------
1 file changed, 4 insertions(+), 21 deletions(-)
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 61565fe..363695c 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -195,8 +195,7 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
char *c;
struct hvm_save_header hdr;
struct hvm_save_end end;
- hvm_save_handler handler;
- hvm_save_vcpu_handler save_one_handler;
+ hvm_save_vcpu_handler handler;
unsigned int i;
int rc;
struct vcpu *v;
@@ -226,15 +225,14 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
/* Save all available kinds of state */
for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
{
- handler = hvm_sr_handlers[i].save;
- save_one_handler = hvm_sr_handlers[i].save_one;
- if ( save_one_handler != NULL )
+ handler = hvm_sr_handlers[i].save_one;
+ if ( handler != NULL )
{
for_each_vcpu ( d, v )
{
printk(XENLOG_G_INFO "HVM %pv save: %s\n",
v, hvm_sr_handlers[i].name);
- rc = save_one_handler(v, h);
+ rc = handler(v, h);
if( rc != 0 )
{
@@ -245,21 +243,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
}
}
}
- else if ( handler != NULL )
- {
- printk(XENLOG_G_INFO "HVM%d save: %s\n",
- d->domain_id, hvm_sr_handlers[i].name);
-
- rc = handler(d, h);
-
- if( rc != 0 )
- {
- printk(XENLOG_G_ERR
- "HVM%d save: failed to save type %"PRIu16"\n",
- d->domain_id, i);
- return -EFAULT;
- }
- }
}
/* Save an end-of-file marker */
--
2.7.4
* [PATCH v15 13/14] x86/hvm: Remove redundant save functions
2018-08-03 13:53 [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (11 preceding siblings ...)
2018-08-03 13:53 ` [PATCH v15 12/14] x86/hvm: Drop the use of save functions Alexandru Isaila
@ 2018-08-03 13:53 ` Alexandru Isaila
2018-08-07 12:47 ` Jan Beulich
2018-08-03 13:53 ` [PATCH v15 14/14] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
2018-08-07 12:59 ` [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Jan Beulich
14 siblings, 1 reply; 26+ messages in thread
From: Alexandru Isaila @ 2018-08-03 13:53 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This patch removes the redundant per-domain save functions and renames the
save_one variants to the plain save names. It also changes the domain
parameter to vcpu in the save functions.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V14:
- Change vcpu to v
- Remove extra space
- Rename save_one handler to save
---
xen/arch/x86/cpu/mcheck/vmce.c | 18 +----------
xen/arch/x86/hvm/hpet.c | 7 ++--
xen/arch/x86/hvm/hvm.c | 73 +++---------------------------------------
xen/arch/x86/hvm/i8254.c | 5 +--
xen/arch/x86/hvm/irq.c | 15 +++++----
xen/arch/x86/hvm/mtrr.c | 19 ++---------
xen/arch/x86/hvm/pmtimer.c | 5 +--
xen/arch/x86/hvm/rtc.c | 5 +--
xen/arch/x86/hvm/save.c | 9 ++----
xen/arch/x86/hvm/vioapic.c | 5 +--
xen/arch/x86/hvm/viridian.c | 23 +++----------
xen/arch/x86/hvm/vlapic.c | 41 +++---------------------
xen/arch/x86/hvm/vpic.c | 5 +--
xen/include/asm-x86/hvm/save.h | 8 ++---
14 files changed, 49 insertions(+), 189 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 35044d7..763d56b 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,7 +349,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
return ret;
}
-static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int vmce_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
{
struct hvm_vmce_vcpu ctxt = {
.caps = v->arch.vmce.mcg_cap,
@@ -361,21 +361,6 @@ static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
}
-static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = vmce_save_vcpu_ctxt_one(v, h);
- if ( err )
- break;
- }
-
- return err;
-}
-
static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
{
unsigned int vcpuid = hvm_load_instance(h);
@@ -396,7 +381,6 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
- vmce_save_vcpu_ctxt_one,
vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
/*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index aff8613..4afa2ab 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -516,16 +516,17 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
};
-static int hpet_save(struct domain *d, hvm_domain_context_t *h)
+static int hpet_save(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
HPETState *hp = domain_vhpet(d);
- struct vcpu *v = pt_global_vcpu_target(d);
int rc;
uint64_t guest_time;
if ( !has_vhpet(d) )
return 0;
+ v = pt_global_vcpu_target(d);
write_lock(&hp->lock);
guest_time = (v->arch.hvm_vcpu.guest_time ?: hvm_get_guest_time(v)) /
STIME_PER_HPET_TICK;
@@ -640,7 +641,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
static void hpet_set(HPETState *h)
{
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4a70251..831f86b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,7 +740,7 @@ void hvm_domain_destroy(struct domain *d)
destroy_vpci_mmcfg(d);
}
-static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
{
struct hvm_tsc_adjust ctxt = {
.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
@@ -749,21 +749,6 @@ static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
}
-static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = hvm_save_tsc_adjust_one(v, h);
- if ( err )
- break;
- }
-
- return err;
-}
-
static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
{
unsigned int vcpuid = hvm_load_instance(h);
@@ -785,10 +770,9 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
- hvm_save_tsc_adjust_one,
hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
-static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
{
struct segment_register seg;
struct hvm_hw_cpu ctxt = {
@@ -895,21 +879,6 @@ static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
}
-static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = hvm_save_cpu_ctxt_one(v, h);
- if ( err )
- break;
- }
-
- return err;
-}
-
/* Return a string indicating the error, or NULL for valid. */
const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
signed int cr0_pg)
@@ -1181,7 +1150,7 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt,
hvm_load_cpu_ctxt,
1, HVMSR_PER_VCPU);
@@ -1189,7 +1158,7 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
save_area) + \
xstate_ctxt_size(xcr0))
-static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states(struct vcpu *v, hvm_domain_context_t *h)
{
struct hvm_hw_cpu_xsave *ctxt;
unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
@@ -1214,21 +1183,6 @@ static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h
return 0;
}
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = hvm_save_cpu_xsave_states_one(v, h);
- if ( err )
- break;
- }
-
- return err;
-}
-
/*
* Structure layout conformity checks, documenting correctness of the cast in
* the invocation of validate_xstate() below.
@@ -1366,7 +1320,7 @@ static const uint32_t msrs_to_send[] = {
};
static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
-static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs(struct vcpu *v, hvm_domain_context_t *h)
{
struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
struct hvm_msr *ctxt;
@@ -1428,21 +1382,6 @@ static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
return 0;
}
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = hvm_save_cpu_msrs_one(v, h);
- if ( err )
- break;
- }
-
- return err;
-}
-
static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
{
unsigned int i, vcpuid = hvm_load_instance(h);
@@ -1535,7 +1474,6 @@ static int __init hvm_register_CPU_save_and_restore(void)
hvm_register_savevm(CPU_XSAVE_CODE,
"CPU_XSAVE",
hvm_save_cpu_xsave_states,
- hvm_save_cpu_xsave_states_one,
hvm_load_cpu_xsave_states,
HVM_CPU_XSAVE_SIZE(xfeature_mask) +
sizeof(struct hvm_save_descriptor),
@@ -1548,7 +1486,6 @@ static int __init hvm_register_CPU_save_and_restore(void)
hvm_register_savevm(CPU_MSR_CODE,
"CPU_MSR",
hvm_save_cpu_msrs,
- hvm_save_cpu_msrs_one,
hvm_load_cpu_msrs,
HVM_CPU_MSR_SIZE(msr_count_max) +
sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index ec77b23..e0d2255 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -390,8 +390,9 @@ void pit_stop_channel0_irq(PITState *pit)
spin_unlock(&pit->lock);
}
-static int pit_save(struct domain *d, hvm_domain_context_t *h)
+static int pit_save(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
PITState *pit = domain_vpit(d);
int rc;
@@ -437,7 +438,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
void pit_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 770eab7..b37275c 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -630,8 +630,9 @@ static int __init dump_irq_info_key_init(void)
}
__initcall(dump_irq_info_key_init);
-static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_pci(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_irq *hvm_irq = hvm_domain_irq(d);
unsigned int asserted, pdev, pintx;
int rc;
@@ -662,16 +663,18 @@ static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
return rc;
}
-static int irq_save_isa(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_isa(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_irq *hvm_irq = hvm_domain_irq(d);
/* Save ISA IRQ lines */
return ( hvm_save_entry(ISA_IRQ, 0, h, &hvm_irq->isa_irq) );
}
-static int irq_save_link(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_link(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_irq *hvm_irq = hvm_domain_irq(d);
/* Save PCI-ISA link state */
@@ -764,9 +767,9 @@ static int irq_load_link(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa,
1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index d4aa026..15650d0 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -718,7 +718,7 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
return 0;
}
-static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr(struct vcpu *v, hvm_domain_context_t *h)
{
const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
struct hvm_hw_mtrr hw_mtrr = {
@@ -754,21 +754,6 @@ static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
}
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- /* save mtrr&pat */
- for_each_vcpu(d, v)
- {
- err = hvm_save_mtrr_msr_one(v, h);
- if ( err )
- break;
- }
- return err;
-}
-
static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
{
int vcpuid, i;
@@ -819,7 +804,7 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_save_mtrr_msr_one,
+HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr,
hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
void memory_type_changed(struct domain *d)
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 0a5e8ce..d8dcbc2 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -249,8 +249,9 @@ static int handle_pmt_io(
return X86EMUL_OKAY;
}
-static int acpi_save(struct domain *d, hvm_domain_context_t *h)
+static int acpi_save(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_hw_acpi *acpi = &d->arch.hvm_domain.acpi;
PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
uint32_t x, msb = acpi->tmr_val & TMR_VAL_MSB;
@@ -309,7 +310,7 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, NULL, acpi_load,
+HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, acpi_load,
1, HVMSR_PER_DOM);
int pmtimer_change_ioport(struct domain *d, unsigned int version)
diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index ce7e71b..58b70fc 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -737,8 +737,9 @@ void rtc_migrate_timers(struct vcpu *v)
}
/* Save RTC hardware state */
-static int rtc_save(struct domain *d, hvm_domain_context_t *h)
+static int rtc_save(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
RTCState *s = domain_vrtc(d);
int rc;
@@ -783,7 +784,7 @@ static int rtc_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, NULL, rtc_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, rtc_load, 1, HVMSR_PER_DOM);
void rtc_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 363695c..43eb582 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -85,7 +85,6 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
/* List of handlers for various HVM save and restore types */
static struct {
hvm_save_handler save;
- hvm_save_vcpu_handler save_one;
hvm_load_handler load;
const char *name;
size_t size;
@@ -96,7 +95,6 @@ static struct {
void __init hvm_register_savevm(uint16_t typecode,
const char *name,
hvm_save_handler save_state,
- hvm_save_vcpu_handler save_one,
hvm_load_handler load_state,
size_t size, int kind)
{
@@ -104,7 +102,6 @@ void __init hvm_register_savevm(uint16_t typecode,
ASSERT(hvm_sr_handlers[typecode].save == NULL);
ASSERT(hvm_sr_handlers[typecode].load == NULL);
hvm_sr_handlers[typecode].save = save_state;
- hvm_sr_handlers[typecode].save_one = save_one;
hvm_sr_handlers[typecode].load = load_state;
hvm_sr_handlers[typecode].name = name;
hvm_sr_handlers[typecode].size = size;
@@ -155,7 +152,7 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
if ( !ctxt.data )
return -ENOMEM;
- if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
+ if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt)) != 0 )
printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
d->domain_id, typecode, rv);
else if ( rv = -ENOENT, ctxt.cur >= sizeof(*desc) )
@@ -195,7 +192,7 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
char *c;
struct hvm_save_header hdr;
struct hvm_save_end end;
- hvm_save_vcpu_handler handler;
+ hvm_save_handler handler;
unsigned int i;
int rc;
struct vcpu *v;
@@ -225,7 +222,7 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
/* Save all available kinds of state */
for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
{
- handler = hvm_sr_handlers[i].save_one;
+ handler = hvm_sr_handlers[i].save;
if ( handler != NULL )
{
for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 66f54e4..86d02cf 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -569,8 +569,9 @@ int vioapic_get_trigger_mode(const struct domain *d, unsigned int gsi)
return vioapic->redirtbl[pin].fields.trig_mode;
}
-static int ioapic_save(struct domain *d, hvm_domain_context_t *h)
+static int ioapic_save(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_vioapic *s;
if ( !has_vioapic(d) )
@@ -601,7 +602,7 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
return hvm_load_entry(IOAPIC, h, &s->domU);
}
-HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, NULL, ioapic_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, ioapic_load, 1, HVMSR_PER_DOM);
void vioapic_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 268ccce..cc37ab4 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -990,8 +990,9 @@ out:
return HVM_HCALL_completed;
}
-static int viridian_save_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_domain_ctxt(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_viridian_domain_context ctxt = {
.time_ref_count = d->arch.hvm_domain.viridian.time_ref_count.val,
.hypercall_gpa = d->arch.hvm_domain.viridian.hypercall_gpa.raw,
@@ -1023,10 +1024,10 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt, NULL,
+HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
-static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
{
struct hvm_viridian_vcpu_context ctxt = {
.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
@@ -1039,21 +1040,6 @@ static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
}
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = viridian_save_vcpu_ctxt_one(v, h);
- if ( err )
- break;
- }
-
- return err;
-}
-
static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
{
int vcpuid;
@@ -1085,7 +1071,6 @@ static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_VCPU, viridian_save_vcpu_ctxt,
- viridian_save_vcpu_ctxt_one,
viridian_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
static int __init parse_viridian_version(const char *arg)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 669075e..e9eed28 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1435,7 +1435,7 @@ static void lapic_rearm(struct vlapic *s)
s->timer_last_update = s->pt.last_plt_gtime;
}
-static int lapic_save_hidden_one(struct vcpu *v, hvm_domain_context_t *h)
+static int lapic_save_hidden(struct vcpu *v, hvm_domain_context_t *h)
{
struct vlapic *s = vcpu_vlapic(v);
@@ -1445,22 +1445,7 @@ static int lapic_save_hidden_one(struct vcpu *v, hvm_domain_context_t *h)
return hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw);
}
-static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = lapic_save_hidden_one(v, h);
- if ( err )
- break;
- }
-
- return err;
-}
-
-static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
+static int lapic_save_regs(struct vcpu *v, hvm_domain_context_t *h)
{
struct vlapic *s;
@@ -1475,24 +1460,6 @@ static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs);
}
-static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- if ( !has_vlapic(d) )
- return 0;
-
- for_each_vcpu ( d, v )
- {
- err = lapic_save_regs_one(v, h);
- if ( err )
- break;
- }
-
- return err;
-}
-
/*
* Following lapic_load_hidden()/lapic_load_regs() we may need to
* correct ID and LDR when they come from an old, broken hypervisor.
@@ -1593,9 +1560,9 @@ static int lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_save_hidden_one, lapic_load_hidden,
+HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_load_hidden,
1, HVMSR_PER_VCPU);
-HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_save_regs_one, lapic_load_regs,
+HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_load_regs,
1, HVMSR_PER_VCPU);
int vlapic_init(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index ca9b4cb..bad5066 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -371,8 +371,9 @@ static int vpic_intercept_elcr_io(
return X86EMUL_OKAY;
}
-static int vpic_save(struct domain *d, hvm_domain_context_t *h)
+static int vpic_save(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_hw_vpic *s;
int i;
@@ -411,7 +412,7 @@ static int vpic_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, NULL, vpic_load, 2, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, vpic_load, 2, HVMSR_PER_DOM);
void vpic_reset(struct domain *d)
{
diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-x86/hvm/save.h
index f2283fc..b867027 100644
--- a/xen/include/asm-x86/hvm/save.h
+++ b/xen/include/asm-x86/hvm/save.h
@@ -95,10 +95,8 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
* The save handler may save multiple instances of a type into the buffer;
* the load handler will be called once for each instance found when
* restoring. Both return non-zero on error. */
-typedef int (*hvm_save_handler) (struct domain *d,
+typedef int (*hvm_save_handler) (struct vcpu *v,
hvm_domain_context_t *h);
-typedef int (*hvm_save_vcpu_handler)(struct vcpu *v,
- hvm_domain_context_t *h);
typedef int (*hvm_load_handler) (struct domain *d,
hvm_domain_context_t *h);
@@ -107,7 +105,6 @@ typedef int (*hvm_load_handler) (struct domain *d,
void hvm_register_savevm(uint16_t typecode,
const char *name,
hvm_save_handler save_state,
- hvm_save_vcpu_handler save_one,
hvm_load_handler load_state,
size_t size, int kind);
@@ -117,13 +114,12 @@ void hvm_register_savevm(uint16_t typecode,
/* Syntactic sugar around that function: specify the max number of
* saves, and this calculates the size of buffer needed */
-#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load, _num, _k) \
+#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k) \
static int __init __hvm_register_##_x##_save_and_restore(void) \
{ \
hvm_register_savevm(HVM_SAVE_CODE(_x), \
#_x, \
&_save, \
- _save_one, \
&_load, \
(_num) * (HVM_SAVE_LENGTH(_x) \
+ sizeof (struct hvm_save_descriptor)), \
--
2.7.4
* [PATCH v15 14/14] x86/domctl: Don't pause the whole domain if only getting vcpu state
2018-08-03 13:53 [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (12 preceding siblings ...)
2018-08-03 13:53 ` [PATCH v15 13/14] x86/hvm: Remove redundant " Alexandru Isaila
@ 2018-08-03 13:53 ` Alexandru Isaila
2018-08-07 12:58 ` Jan Beulich
2018-08-07 12:59 ` [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Jan Beulich
14 siblings, 1 reply; 26+ messages in thread
From: Alexandru Isaila @ 2018-08-03 13:53 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This patch changes hvm_save_one() to save one typecode from a single vcpu.
Now that the save functions get their data from a single vcpu, we can pause
that specific vcpu instead of the whole domain.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
xen/arch/x86/domctl.c | 4 ++--
xen/arch/x86/hvm/save.c | 41 +++++++++++++++++------------------------
2 files changed, 19 insertions(+), 26 deletions(-)
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 8fbbf3a..bd6ba62 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -591,12 +591,12 @@ long arch_do_domctl(
!is_hvm_domain(d) )
break;
- domain_pause(d);
+ vcpu_pause(d->vcpu[domctl->u.hvmcontext_partial.instance]);
ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
domctl->u.hvmcontext_partial.instance,
domctl->u.hvmcontext_partial.buffer,
&domctl->u.hvmcontext_partial.bufsz);
- domain_unpause(d);
+ vcpu_unpause(d->vcpu[domctl->u.hvmcontext_partial.instance]);
if ( !ret )
copyback = true;
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 43eb582..28f3b57 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -138,6 +138,7 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
int rv;
hvm_domain_context_t ctxt = { };
const struct hvm_save_descriptor *desc;
+ uint32_t off = 0;
if ( d->is_dying ||
typecode > HVM_SAVE_CODE_MAX ||
@@ -146,8 +147,6 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
return -EINVAL;
ctxt.size = hvm_sr_handlers[typecode].size;
- if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
- ctxt.size *= d->max_vcpus;
ctxt.data = xmalloc_bytes(ctxt.size);
if ( !ctxt.data )
return -ENOMEM;
@@ -157,29 +156,23 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
d->domain_id, typecode, rv);
else if ( rv = -ENOENT, ctxt.cur >= sizeof(*desc) )
{
- uint32_t off;
-
- for ( off = 0; off <= (ctxt.cur - sizeof(*desc)); off += desc->length )
+ desc = (void *)(ctxt.data + off);
+ /* Move past header */
+ off += sizeof(*desc);
+ if ( ctxt.cur < desc->length ||
+ off > ctxt.cur - desc->length )
+ rv = -EFAULT;
+ if ( instance == desc->instance )
{
- desc = (void *)(ctxt.data + off);
- /* Move past header */
- off += sizeof(*desc);
- if ( ctxt.cur < desc->length ||
- off > ctxt.cur - desc->length )
- break;
- if ( instance == desc->instance )
- {
- rv = 0;
- if ( guest_handle_is_null(handle) )
- *bufsz = desc->length;
- else if ( *bufsz < desc->length )
- rv = -ENOBUFS;
- else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
- rv = -EFAULT;
- else
- *bufsz = desc->length;
- break;
- }
+ rv = 0;
+ if ( guest_handle_is_null(handle) )
+ *bufsz = desc->length;
+ else if ( *bufsz < desc->length )
+ rv = -ENOBUFS;
+ else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
+ rv = -EFAULT;
+ else
+ *bufsz = desc->length;
}
}
--
2.7.4
* Re: [PATCH v15 06/14] x86/hvm: Introduce hvm_save_mtrr_msr_one func
2018-08-03 13:53 ` [PATCH v15 06/14] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
@ 2018-08-07 12:00 ` Jan Beulich
2018-08-07 15:02 ` Isaila Alexandru
0 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2018-08-07 12:00 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 03.08.18 at 15:53, <aisaila@bitdefender.com> wrote:
> + for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
> + {
> + /* save physbase */
> + hw_mtrr.msr_mtrr_var[i * 2] = mtrr_state->var_ranges->base;
> + /* save physmask */
> + hw_mtrr.msr_mtrr_var[i * 2 + 1] = mtrr_state->var_ranges->mask;
> + }
One of the intended side effects of using the structure fields on the rhs
was to be able to drop the (now redundant) comments.
> - hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
> + memcpy(hw_mtrr.msr_mtrr_fixed, mtrr_state->fixed_ranges, NUM_FIXED_MSR);
You want to BUILD_BUG_ON() array sizes differing, and then use
sizeof() in the call to memcpy().
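A minimal sketch of what this suggestion amounts to, reusing the field names
from the hunk quoted above (an illustration, not the actual patch):

    BUILD_BUG_ON(sizeof(hw_mtrr.msr_mtrr_fixed) !=
                 sizeof(mtrr_state->fixed_ranges));
    memcpy(hw_mtrr.msr_mtrr_fixed, mtrr_state->fixed_ranges,
           sizeof(hw_mtrr.msr_mtrr_fixed));

The BUILD_BUG_ON() fails the build if the two arrays ever stop covering the
same number of bytes, and sizeof() keeps the copy length tied to the
destination array rather than to a loosely related constant.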
Jan
* Re: [PATCH v15 09/14] x86/hvm: Introduce lapic_save_regs_one func
2018-08-03 13:53 ` [PATCH v15 09/14] x86/hvm: Introduce lapic_save_regs_one func Alexandru Isaila
@ 2018-08-07 12:09 ` Jan Beulich
2018-08-07 12:37 ` Isaila Alexandru
0 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2018-08-07 12:09 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 03.08.18 at 15:53, <aisaila@bitdefender.com> wrote:
> This is used to save data from a single instance.
>
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> ---
> xen/arch/x86/hvm/vlapic.c | 27 +++++++++++++++++++--------
> 1 file changed, 19 insertions(+), 8 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
> index 0795161..d35810e 100644
> --- a/xen/arch/x86/hvm/vlapic.c
> +++ b/xen/arch/x86/hvm/vlapic.c
> @@ -1460,26 +1460,37 @@ static int lapic_save_hidden(struct domain *d,
> hvm_domain_context_t *h)
> return err;
> }
>
> +static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
> +{
> + struct vlapic *s;
> +
> + if ( !has_vlapic(v->domain) )
> + return 0;
> +
> + if ( hvm_funcs.sync_pir_to_irr )
> + hvm_funcs.sync_pir_to_irr(v);
> +
> + s = vcpu_vlapic(v);
> +
> + return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs);
> +}
Here, as well as in patch 8, there's little point in having a local variable s
that is used just once. If you really think you want to retain them,
here it can be a pointer to const (other than in patch 8 afaict), and, like
in patch 8, it could have an initializer instead of a separate
assignment statement later on.
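A sketch of the first variant suggested above, reusing the identifiers from
the quoted function (not the actual patch):

    static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
    {
        if ( !has_vlapic(v->domain) )
            return 0;

        if ( hvm_funcs.sync_pir_to_irr )
            hvm_funcs.sync_pir_to_irr(v);

        /* The vlapic pointer is used only here, so no local variable. */
        return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, vcpu_vlapic(v)->regs);
    }

Alternatively, keeping a const-qualified pointer with an initializer
("const struct vlapic *s = vcpu_vlapic(v);") would address the same remark.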
> static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
> {
> struct vcpu *v;
> - struct vlapic *s;
> - int rc = 0;
> + int err = 0;
>
> if ( !has_vlapic(d) )
> return 0;
>
> for_each_vcpu ( d, v )
> {
> - if ( hvm_funcs.sync_pir_to_irr )
> - hvm_funcs.sync_pir_to_irr(v);
> -
> - s = vcpu_vlapic(v);
> - if ( (rc = hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs)) != 0 )
> + err = lapic_save_regs_one(v, h);
> + if ( err )
> break;
> }
>
> - return rc;
> + return err;
> }
Since the whole function is meant to go away anyway, it doesn't
matter much, but why did you see a need to replace "rc" by "err"?
This only increases code churn (even if just slightly). IOW: No
need to change this, but something to consider in the future.
Jan
* Re: [PATCH v15 11/14] x86/domctl: Use hvm_save_vcpu_handler
2018-08-03 13:53 ` [PATCH v15 11/14] x86/domctl: Use hvm_save_vcpu_handler Alexandru Isaila
@ 2018-08-07 12:25 ` Jan Beulich
0 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2018-08-07 12:25 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 03.08.18 at 15:53, <aisaila@bitdefender.com> wrote:
> --- a/xen/arch/x86/hvm/save.c
> +++ b/xen/arch/x86/hvm/save.c
> @@ -196,7 +196,10 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
> struct hvm_save_header hdr;
> struct hvm_save_end end;
> hvm_save_handler handler;
> + hvm_save_vcpu_handler save_one_handler;
> unsigned int i;
> + int rc;
> + struct vcpu *v;
Please move the declarations you add into the scopes where they're
actually needed (but please avoid replicating rc). I realize pre-existing
code isn't in line with this, but please le's not widen the problem. In
fact I wouldn't mind at all if you moved handler down right away. But
as that's slated to go away, that's probably not very important.
> @@ -224,11 +227,32 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
> for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
> {
> handler = hvm_sr_handlers[i].save;
> - if ( handler != NULL )
> + save_one_handler = hvm_sr_handlers[i].save_one;
> + if ( save_one_handler != NULL )
Would you mind omitting the redundant "!= NULL" here and below?
> + {
> + for_each_vcpu ( d, v )
> + {
> + printk(XENLOG_G_INFO "HVM %pv save: %s\n",
> + v, hvm_sr_handlers[i].name);
> + rc = save_one_handler(v, h);
> +
> + if( rc != 0 )
Missing blank, and just like above "!= 0" is redundant and could be
omitted (same below).
> + {
> + printk(XENLOG_G_ERR
> + "HVM %pv save: failed to save type %"PRIu16"\n",
> + v, i);
> + return -EFAULT;
Why -EFAULT? The pre-existing bad use does not count as an excuse.
If the value of rc can't be used (perhaps because there may be positive
values or -1 coming back), pick something that at least comes a little
closer to representing the actual condition (EIO, ENODATA, EOPNOTSUPP
all come to mind, but much depends on what conditions actually exist).
I'd then encourage you to also change the pre-existing bad use.
> + }
> + }
> + }
> + else if ( handler != NULL )
> {
> printk(XENLOG_G_INFO "HVM%d save: %s\n",
> d->domain_id, hvm_sr_handlers[i].name);
> - if ( handler(d, h) != 0 )
> +
> + rc = handler(d, h);
> +
> + if( rc != 0 )
Please either omit the blank line ahead of the invocation of handler(),
or the one following it. First and foremost: have this block be
consistent, blank-line-wise, with the one above.
Jan
* Re: [PATCH v15 12/14] x86/hvm: Drop the use of save functions
2018-08-03 13:53 ` [PATCH v15 12/14] x86/hvm: Drop the use of save functions Alexandru Isaila
@ 2018-08-07 12:28 ` Jan Beulich
2018-08-07 12:41 ` Jan Beulich
1 sibling, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2018-08-07 12:28 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 03.08.18 at 15:53, <aisaila@bitdefender.com> wrote:
> This patch drops the use of save functions in hvm_save.
But quite a few types still have this set to NULL? How do things work
at this point of the series? Am I overlooking anything? I think this
needs to be swapped with patch 13.
Jan
* Re: [PATCH v15 09/14] x86/hvm: Introduce lapic_save_regs_one func
2018-08-07 12:09 ` Jan Beulich
@ 2018-08-07 12:37 ` Isaila Alexandru
0 siblings, 0 replies; 26+ messages in thread
From: Isaila Alexandru @ 2018-08-07 12:37 UTC (permalink / raw)
To: Jan Beulich; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
On Tue, 2018-08-07 at 06:09 -0600, Jan Beulich wrote:
> >
> > >
> > > >
> > > > On 03.08.18 at 15:53, <aisaila@bitdefender.com> wrote:
> > This is used to save data from a single instance.
> >
> > Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> > ---
> > xen/arch/x86/hvm/vlapic.c | 27 +++++++++++++++++++--------
> > 1 file changed, 19 insertions(+), 8 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
> > index 0795161..d35810e 100644
> > --- a/xen/arch/x86/hvm/vlapic.c
> > +++ b/xen/arch/x86/hvm/vlapic.c
> > @@ -1460,26 +1460,37 @@ static int lapic_save_hidden(struct domain
> > *d,
> > hvm_domain_context_t *h)
> > return err;
> > }
> >
> > +static int lapic_save_regs_one(struct vcpu *v,
> > hvm_domain_context_t *h)
> > +{
> > + struct vlapic *s;
> > +
> > + if ( !has_vlapic(v->domain) )
> > + return 0;
> > +
> > + if ( hvm_funcs.sync_pir_to_irr )
> > + hvm_funcs.sync_pir_to_irr(v);
> > +
> > + s = vcpu_vlapic(v);
> > +
> > + return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs);
> > +}
> Here as well as in patch 8 there's little point in having a local
> variable s
> which is used just once. If you really think you want to retain them,
> here it can be pointer to const (other than in patch 8 afaict), and
> like
> in patch 8 it could have an initializer instead of later having a
> separate
> assignment statement.
>
> >
> > static int lapic_save_regs(struct domain *d, hvm_domain_context_t
> > *h)
> > {
> > struct vcpu *v;
> > - struct vlapic *s;
> > - int rc = 0;
> > + int err = 0;
> >
> > if ( !has_vlapic(d) )
> > return 0;
> >
> > for_each_vcpu ( d, v )
> > {
> > - if ( hvm_funcs.sync_pir_to_irr )
> > - hvm_funcs.sync_pir_to_irr(v);
> > -
> > - s = vcpu_vlapic(v);
> > - if ( (rc = hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s-
> > >regs)) != 0 )
> > + err = lapic_save_regs_one(v, h);
> > + if ( err )
> > break;
> > }
> >
> > - return rc;
> > + return err;
> > }
> Since the whole function is meant to go away anyway, it doesn't
> matter much, but why did you see a need to replace "rc" by "err"?
> This only increases code churn (even if just slightly). IOW: No
> need to change this, but something to consider in the future.
>
err was chosen just so that all the functions use the same variable name;
it was done purely for consistency.
Alex
* Re: [PATCH v15 12/14] x86/hvm: Drop the use of save functions
2018-08-03 13:53 ` [PATCH v15 12/14] x86/hvm: Drop the use of save functions Alexandru Isaila
2018-08-07 12:28 ` Jan Beulich
@ 2018-08-07 12:41 ` Jan Beulich
1 sibling, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2018-08-07 12:41 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 03.08.18 at 15:53, <aisaila@bitdefender.com> wrote:
> @@ -226,15 +225,14 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
> /* Save all available kinds of state */
> for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
> {
> - handler = hvm_sr_handlers[i].save;
> - save_one_handler = hvm_sr_handlers[i].save_one;
> - if ( save_one_handler != NULL )
> + handler = hvm_sr_handlers[i].save_one;
> + if ( handler != NULL )
> {
> for_each_vcpu ( d, v )
> {
> printk(XENLOG_G_INFO "HVM %pv save: %s\n",
> v, hvm_sr_handlers[i].name);
> - rc = save_one_handler(v, h);
> + rc = handler(v, h);
As already said on v14: You must not invoke the handler once per
vCPU for HVMSR_PER_DOM type records.
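A sketch of the kind of dispatch this remark calls for, reusing the loop from
the quoted hunk (the .kind field is the one recorded by hvm_register_savevm();
an illustration, not the eventual fix):

    if ( hvm_sr_handlers[i].kind == HVMSR_PER_VCPU )
    {
        for_each_vcpu ( d, v )
        {
            rc = handler(v, h);
            if ( rc )
                break;
        }
    }
    else
    {
        /* Per-domain record: invoke the handler exactly once. */
        rc = handler(d->vcpu[0], h);
    }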
Jan
* Re: [PATCH v15 13/14] x86/hvm: Remove redundant save functions
2018-08-03 13:53 ` [PATCH v15 13/14] x86/hvm: Remove redundant " Alexandru Isaila
@ 2018-08-07 12:47 ` Jan Beulich
0 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2018-08-07 12:47 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 03.08.18 at 15:53, <aisaila@bitdefender.com> wrote:
> @@ -155,7 +152,7 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
> if ( !ctxt.data )
> return -ENOMEM;
>
> - if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
> + if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt)) != 0 )
There is no bounds check whatsoever before this use of instance as an
array index. You want to check against vCPU count for HVMSR_PER_VCPU
records, and pass vCPU0 for HVMSR_PER_DOM ones. I'm relatively sure
I've said so already on an earlier iteration of the series.
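A sketch of the bounds handling being asked for (illustrative only; the exact
error code and placement are open):

    struct vcpu *v;

    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
         instance >= d->max_vcpus )
        return -ENOENT;

    /* Per-vCPU records use the requested instance; per-domain ones vCPU0. */
    v = hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU
        ? d->vcpu[instance] : d->vcpu[0];

    if ( (rv = hvm_sr_handlers[typecode].save(v, &ctxt)) != 0 )
        printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
               d->domain_id, typecode, rv);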
> @@ -107,7 +105,6 @@ typedef int (*hvm_load_handler) (struct domain *d,
> void hvm_register_savevm(uint16_t typecode,
> const char *name,
> hvm_save_handler save_state,
> - hvm_save_vcpu_handler save_one,
> hvm_load_handler load_state,
> size_t size, int kind);
>
> @@ -117,13 +114,12 @@ void hvm_register_savevm(uint16_t typecode,
>
> /* Syntactic sugar around that function: specify the max number of
> * saves, and this calculates the size of buffer needed */
> -#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load, _num, _k) \
> +#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k) \
> static int __init __hvm_register_##_x##_save_and_restore(void) \
> { \
> hvm_register_savevm(HVM_SAVE_CODE(_x), \
> #_x, \
> &_save, \
> - _save_one, \
> &_load, \
> (_num) * (HVM_SAVE_LENGTH(_x) \
> + sizeof (struct hvm_save_descriptor)), \
As to patch splitting: One option looks to be to fold 12 and 13, but
that would make for a pretty big patch doing at least two things at
the same time. Another option could be to simply store NULL here
into the now unused field in order to have what is now patch 12 in
the _next_ step remove that field (and its use in hvm_save()) and
do the renaming (i.e. the dropping of _one).
Jan
* Re: [PATCH v15 14/14] x86/domctl: Don't pause the whole domain if only getting vcpu state
2018-08-03 13:53 ` [PATCH v15 14/14] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
@ 2018-08-07 12:58 ` Jan Beulich
0 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2018-08-07 12:58 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 03.08.18 at 15:53, <aisaila@bitdefender.com> wrote:
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -591,12 +591,12 @@ long arch_do_domctl(
> !is_hvm_domain(d) )
> break;
>
> - domain_pause(d);
> + vcpu_pause(d->vcpu[domctl->u.hvmcontext_partial.instance]);
> ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
> domctl->u.hvmcontext_partial.instance,
> domctl->u.hvmcontext_partial.buffer,
> &domctl->u.hvmcontext_partial.bufsz);
> - domain_unpause(d);
> + vcpu_unpause(d->vcpu[domctl->u.hvmcontext_partial.instance]);
Same issue here - there's no bounds check of
domctl->u.hvmcontext_partial.instance before its use as array index.
I'm afraid you can't do the pausing here anymore, both for this reason
and because you still need to pause the whole domain for HVMSR_PER_DOM
type records. Yet you'll know the type only inside hvm_save_one().
> --- a/xen/arch/x86/hvm/save.c
> +++ b/xen/arch/x86/hvm/save.c
> @@ -138,6 +138,7 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
> int rv;
> hvm_domain_context_t ctxt = { };
> const struct hvm_save_descriptor *desc;
> + uint32_t off = 0;
Why does this get moved here?
> @@ -157,29 +156,23 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
> d->domain_id, typecode, rv);
> else if ( rv = -ENOENT, ctxt.cur >= sizeof(*desc) )
> {
> - uint32_t off;
> -
> - for ( off = 0; off <= (ctxt.cur - sizeof(*desc)); off += desc->length )
> + desc = (void *)(ctxt.data + off);
> + /* Move past header */
> + off += sizeof(*desc);
> + if ( ctxt.cur < desc->length ||
> + off > ctxt.cur - desc->length )
> + rv = -EFAULT;
> + if ( instance == desc->instance )
> {
> - desc = (void *)(ctxt.data + off);
> - /* Move past header */
> - off += sizeof(*desc);
> - if ( ctxt.cur < desc->length ||
> - off > ctxt.cur - desc->length )
> - break;
> - if ( instance == desc->instance )
> - {
> - rv = 0;
> - if ( guest_handle_is_null(handle) )
> - *bufsz = desc->length;
> - else if ( *bufsz < desc->length )
> - rv = -ENOBUFS;
> - else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
> - rv = -EFAULT;
> - else
> - *bufsz = desc->length;
> - break;
> - }
You can't just delete this loop - it's still needed for multi-instance
records which aren't per-vCPU (PIC is the only example right now
iirc). Since the instance is going to be the correct one for
HVMSR_PER_VCPU type records, can't you simply leave the code
here alone?
Jan
* Re: [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance
2018-08-03 13:53 [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (13 preceding siblings ...)
2018-08-03 13:53 ` [PATCH v15 14/14] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
@ 2018-08-07 12:59 ` Jan Beulich
14 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2018-08-07 12:59 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 03.08.18 at 15:53, <aisaila@bitdefender.com> wrote:
> Alexandru Isaila (14):
>
> x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
> x86/hvm: Introduce hvm_save_tsc_adjust_one() func
> x86/hvm: Introduce hvm_save_cpu_ctxt_one func
> x86/hvm: Introduce hvm_save_cpu_xsave_states_one
> x86/hvm: Introduce hvm_save_cpu_msrs_one func
> x86/hvm: Introduce hvm_save_mtrr_msr_one func
> x86/hvm: Introduce viridian_save_vcpu_ctxt_one()
> x86/hvm: Introduce lapic_save_hidden_one
> x86/hvm: Introduce lapic_save_regs_one func
> x86/hvm: Add handler for save_one funcs
> x86/domctl: Use hvm_save_vcpu_handler
> x86/hvm: Drop the use of save functions
> x86/hvm: Remove redundant save functions
> x86/domctl: Don't pause the whole domain if only
Patches 1...5 and 10
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Jan
* Re: [PATCH v15 06/14] x86/hvm: Introduce hvm_save_mtrr_msr_one func
2018-08-07 12:00 ` Jan Beulich
@ 2018-08-07 15:02 ` Isaila Alexandru
2018-08-07 15:11 ` Jan Beulich
0 siblings, 1 reply; 26+ messages in thread
From: Isaila Alexandru @ 2018-08-07 15:02 UTC (permalink / raw)
To: Jan Beulich; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>
> >
> > - hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
> > + memcpy(hw_mtrr.msr_mtrr_fixed, mtrr_state->fixed_ranges,
> > NUM_FIXED_MSR);
> You want to BUILD_BUG_ON() array sizes differing, and then use
> sizeof() in the call to memcpy().
>
In this case the sizes are different:
msr_mtrr_fixed[NUM_FIXED_MSR];
fixed_ranges[NUM_FIXED_RANGES];
#define NUM_FIXED_RANGES 88
#define NUM_FIXED_MSR 11
so the BUILD_BUG_ON() would most likely trigger.
Alex
* Re: [PATCH v15 06/14] x86/hvm: Introduce hvm_save_mtrr_msr_one func
2018-08-07 15:02 ` Isaila Alexandru
@ 2018-08-07 15:11 ` Jan Beulich
0 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2018-08-07 15:11 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 07.08.18 at 17:02, <aisaila@bitdefender.com> wrote:
>>
>> >
>> > - hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
>> > + memcpy(hw_mtrr.msr_mtrr_fixed, mtrr_state->fixed_ranges,
>> > NUM_FIXED_MSR);
>> You want to BUILD_BUG_ON() array sizes differing, and then use
>> sizeof() in the call to memcpy().
>>
> In this case sizes are different:
> msr_mtrr_fixed[NUM_FIXED_MSR];
> fixed_ranges[NUM_FIXED_RANGES];
> #define NUM_FIXED_RANGES 88
> #define NUM_FIXED_MSR 11
The base type of msr_mtrr_fixed[] is uint64_t, while fixed_ranges[]'s
is uint8_t. I had specifically used sizeof() in my previous reply (instead
of ARRAY_SIZE()) to avoid exactly this kind of confusion.
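In other words, working the sizes out from the constants and base types
quoted in this sub-thread:

    sizeof(msr_mtrr_fixed) = NUM_FIXED_MSR    * sizeof(uint64_t) = 11 * 8 = 88 bytes
    sizeof(fixed_ranges)   = NUM_FIXED_RANGES * sizeof(uint8_t)  = 88 * 1 = 88 bytes

so a BUILD_BUG_ON() comparing the two sizeof() values holds, whereas
comparing element counts (11 vs. 88) would not.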
Jan
Thread overview: 26+ messages
2018-08-03 13:53 [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 01/14] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 02/14] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 03/14] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 04/14] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 05/14] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 06/14] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
2018-08-07 12:00 ` Jan Beulich
2018-08-07 15:02 ` Isaila Alexandru
2018-08-07 15:11 ` Jan Beulich
2018-08-03 13:53 ` [PATCH v15 07/14] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 08/14] x86/hvm: Introduce lapic_save_hidden_one Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 09/14] x86/hvm: Introduce lapic_save_regs_one func Alexandru Isaila
2018-08-07 12:09 ` Jan Beulich
2018-08-07 12:37 ` Isaila Alexandru
2018-08-03 13:53 ` [PATCH v15 10/14] x86/hvm: Add handler for save_one funcs Alexandru Isaila
2018-08-03 13:53 ` [PATCH v15 11/14] x86/domctl: Use hvm_save_vcpu_handler Alexandru Isaila
2018-08-07 12:25 ` Jan Beulich
2018-08-03 13:53 ` [PATCH v15 12/14] x86/hvm: Drop the use of save functions Alexandru Isaila
2018-08-07 12:28 ` Jan Beulich
2018-08-07 12:41 ` Jan Beulich
2018-08-03 13:53 ` [PATCH v15 13/14] x86/hvm: Remove redundant " Alexandru Isaila
2018-08-07 12:47 ` Jan Beulich
2018-08-03 13:53 ` [PATCH v15 14/14] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
2018-08-07 12:58 ` Jan Beulich
2018-08-07 12:59 ` [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance Jan Beulich