* [PATCH v14 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
2018-07-25 12:14 [PATCH v14 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
@ 2018-07-25 12:14 ` Alexandru Isaila
2018-07-31 11:56 ` Jan Beulich
2018-07-25 12:14 ` [PATCH v14 02/11] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
` (10 subsequent siblings)
11 siblings, 1 reply; 33+ messages in thread
From: Alexandru Isaila @ 2018-07-25 12:14 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V11:
- Removed the memset and added init with {}.
---
xen/arch/x86/cpu/mcheck/vmce.c | 21 +++++++++++++--------
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index e07cd2f..31e553c 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,6 +349,18 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
return ret;
}
+static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+ struct hvm_vmce_vcpu ctxt = {
+ .caps = v->arch.vmce.mcg_cap,
+ .mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
+ .mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
+ .mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
+ };
+
+ return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+}
+
static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
{
struct vcpu *v;
@@ -356,14 +368,7 @@ static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
for_each_vcpu ( d, v )
{
- struct hvm_vmce_vcpu ctxt = {
- .caps = v->arch.vmce.mcg_cap,
- .mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
- .mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
- .mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
- };
-
- err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+ err = vmce_save_vcpu_ctxt_one(v, h);
if ( err )
break;
}
--
2.7.4
* Re: [PATCH v14 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
2018-07-25 12:14 ` [PATCH v14 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-07-31 11:56 ` Jan Beulich
2018-07-31 12:06 ` Jan Beulich
0 siblings, 1 reply; 33+ messages in thread
From: Jan Beulich @ 2018-07-31 11:56 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -349,6 +349,18 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
> return ret;
> }
>
> +static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
Afaict v can be pointer to const.
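Along the lines of (just a sketch of the suggestion - the function body only
reads through v, so I'd expect no further change to be needed for it to build):

static int vmce_save_vcpu_ctxt_one(const struct vcpu *v,
                                   hvm_domain_context_t *h)
{
    /* Only the const qualifier differs from the patch above. */
    struct hvm_vmce_vcpu ctxt = {
        .caps = v->arch.vmce.mcg_cap,
        .mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
        .mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
        .mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
    };

    return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
}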
Jan
* Re: [PATCH v14 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
2018-07-31 11:56 ` Jan Beulich
@ 2018-07-31 12:06 ` Jan Beulich
0 siblings, 0 replies; 33+ messages in thread
From: Jan Beulich @ 2018-07-31 12:06 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 31.07.18 at 13:56, <JBeulich@suse.com> wrote:
>>>> On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>> @@ -349,6 +349,18 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
>> return ret;
>> }
>>
>> +static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
>
> Afaict v can be pointer to const.
Hmm, I guess rather not, or else you'd have to undo that in
patch 8 or 9 or so.
Jan
* [PATCH v14 02/11] x86/hvm: Introduce hvm_save_tsc_adjust_one() func
2018-07-25 12:14 [PATCH v14 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
2018-07-25 12:14 ` [PATCH v14 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-07-25 12:14 ` Alexandru Isaila
2018-07-25 12:14 ` [PATCH v14 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
` (9 subsequent siblings)
11 siblings, 0 replies; 33+ messages in thread
From: Alexandru Isaila @ 2018-07-25 12:14 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V13:
- Moved tsc_adjust to the initializer.
---
xen/arch/x86/hvm/hvm.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 93092d2..d90da9a 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,16 +740,23 @@ void hvm_domain_destroy(struct domain *d)
destroy_vpci_mmcfg(d);
}
+static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+ struct hvm_tsc_adjust ctxt = {
+ .tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
+ };
+
+ return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+}
+
static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
{
struct vcpu *v;
- struct hvm_tsc_adjust ctxt;
int err = 0;
for_each_vcpu ( d, v )
{
- ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
- err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+ err = hvm_save_tsc_adjust_one(v, h);
if ( err )
break;
}
--
2.7.4
* [PATCH v14 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
2018-07-25 12:14 [PATCH v14 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
2018-07-25 12:14 ` [PATCH v14 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
2018-07-25 12:14 ` [PATCH v14 02/11] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
@ 2018-07-25 12:14 ` Alexandru Isaila
2018-07-31 12:01 ` Jan Beulich
2018-07-25 12:14 ` [PATCH v14 04/11] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
` (8 subsequent siblings)
11 siblings, 1 reply; 33+ messages in thread
From: Alexandru Isaila @ 2018-07-25 12:14 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V12:
- Changed memset to {} init.
---
xen/arch/x86/hvm/hvm.c | 214 +++++++++++++++++++++++++------------------------
1 file changed, 111 insertions(+), 103 deletions(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d90da9a..720204c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -787,119 +787,127 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
+static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+ struct segment_register seg;
+ struct hvm_hw_cpu ctxt = {};
+
+ /* Architecture-specific vmcs/vmcb bits */
+ hvm_funcs.save_cpu_ctxt(v, &ctxt);
+
+ ctxt.tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc);
+
+ ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
+
+ hvm_get_segment_register(v, x86_seg_idtr, &seg);
+ ctxt.idtr_limit = seg.limit;
+ ctxt.idtr_base = seg.base;
+
+ hvm_get_segment_register(v, x86_seg_gdtr, &seg);
+ ctxt.gdtr_limit = seg.limit;
+ ctxt.gdtr_base = seg.base;
+
+ hvm_get_segment_register(v, x86_seg_cs, &seg);
+ ctxt.cs_sel = seg.sel;
+ ctxt.cs_limit = seg.limit;
+ ctxt.cs_base = seg.base;
+ ctxt.cs_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_ds, &seg);
+ ctxt.ds_sel = seg.sel;
+ ctxt.ds_limit = seg.limit;
+ ctxt.ds_base = seg.base;
+ ctxt.ds_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_es, &seg);
+ ctxt.es_sel = seg.sel;
+ ctxt.es_limit = seg.limit;
+ ctxt.es_base = seg.base;
+ ctxt.es_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_ss, &seg);
+ ctxt.ss_sel = seg.sel;
+ ctxt.ss_limit = seg.limit;
+ ctxt.ss_base = seg.base;
+ ctxt.ss_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_fs, &seg);
+ ctxt.fs_sel = seg.sel;
+ ctxt.fs_limit = seg.limit;
+ ctxt.fs_base = seg.base;
+ ctxt.fs_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_gs, &seg);
+ ctxt.gs_sel = seg.sel;
+ ctxt.gs_limit = seg.limit;
+ ctxt.gs_base = seg.base;
+ ctxt.gs_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_tr, &seg);
+ ctxt.tr_sel = seg.sel;
+ ctxt.tr_limit = seg.limit;
+ ctxt.tr_base = seg.base;
+ ctxt.tr_arbytes = seg.attr;
+
+ hvm_get_segment_register(v, x86_seg_ldtr, &seg);
+ ctxt.ldtr_sel = seg.sel;
+ ctxt.ldtr_limit = seg.limit;
+ ctxt.ldtr_base = seg.base;
+ ctxt.ldtr_arbytes = seg.attr;
+
+ if ( v->fpu_initialised )
+ {
+ memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
+ ctxt.flags = XEN_X86_FPU_INITIALISED;
+ }
+
+ ctxt.rax = v->arch.user_regs.rax;
+ ctxt.rbx = v->arch.user_regs.rbx;
+ ctxt.rcx = v->arch.user_regs.rcx;
+ ctxt.rdx = v->arch.user_regs.rdx;
+ ctxt.rbp = v->arch.user_regs.rbp;
+ ctxt.rsi = v->arch.user_regs.rsi;
+ ctxt.rdi = v->arch.user_regs.rdi;
+ ctxt.rsp = v->arch.user_regs.rsp;
+ ctxt.rip = v->arch.user_regs.rip;
+ ctxt.rflags = v->arch.user_regs.rflags;
+ ctxt.r8 = v->arch.user_regs.r8;
+ ctxt.r9 = v->arch.user_regs.r9;
+ ctxt.r10 = v->arch.user_regs.r10;
+ ctxt.r11 = v->arch.user_regs.r11;
+ ctxt.r12 = v->arch.user_regs.r12;
+ ctxt.r13 = v->arch.user_regs.r13;
+ ctxt.r14 = v->arch.user_regs.r14;
+ ctxt.r15 = v->arch.user_regs.r15;
+ ctxt.dr0 = v->arch.debugreg[0];
+ ctxt.dr1 = v->arch.debugreg[1];
+ ctxt.dr2 = v->arch.debugreg[2];
+ ctxt.dr3 = v->arch.debugreg[3];
+ ctxt.dr6 = v->arch.debugreg[6];
+ ctxt.dr7 = v->arch.debugreg[7];
+
+ return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
+}
+
static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
{
struct vcpu *v;
- struct hvm_hw_cpu ctxt;
- struct segment_register seg;
+ int err = 0;
for_each_vcpu ( d, v )
{
- /* We don't need to save state for a vcpu that is down; the restore
- * code will leave it down if there is nothing saved. */
+ /*
+ * We don't need to save state for a vcpu that is down; the restore
+ * code will leave it down if there is nothing saved.
+ */
if ( v->pause_flags & VPF_down )
continue;
- memset(&ctxt, 0, sizeof(ctxt));
-
- /* Architecture-specific vmcs/vmcb bits */
- hvm_funcs.save_cpu_ctxt(v, &ctxt);
-
- ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm_domain.sync_tsc);
-
- ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
-
- hvm_get_segment_register(v, x86_seg_idtr, &seg);
- ctxt.idtr_limit = seg.limit;
- ctxt.idtr_base = seg.base;
-
- hvm_get_segment_register(v, x86_seg_gdtr, &seg);
- ctxt.gdtr_limit = seg.limit;
- ctxt.gdtr_base = seg.base;
-
- hvm_get_segment_register(v, x86_seg_cs, &seg);
- ctxt.cs_sel = seg.sel;
- ctxt.cs_limit = seg.limit;
- ctxt.cs_base = seg.base;
- ctxt.cs_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_ds, &seg);
- ctxt.ds_sel = seg.sel;
- ctxt.ds_limit = seg.limit;
- ctxt.ds_base = seg.base;
- ctxt.ds_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_es, &seg);
- ctxt.es_sel = seg.sel;
- ctxt.es_limit = seg.limit;
- ctxt.es_base = seg.base;
- ctxt.es_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_ss, &seg);
- ctxt.ss_sel = seg.sel;
- ctxt.ss_limit = seg.limit;
- ctxt.ss_base = seg.base;
- ctxt.ss_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_fs, &seg);
- ctxt.fs_sel = seg.sel;
- ctxt.fs_limit = seg.limit;
- ctxt.fs_base = seg.base;
- ctxt.fs_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_gs, &seg);
- ctxt.gs_sel = seg.sel;
- ctxt.gs_limit = seg.limit;
- ctxt.gs_base = seg.base;
- ctxt.gs_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_tr, &seg);
- ctxt.tr_sel = seg.sel;
- ctxt.tr_limit = seg.limit;
- ctxt.tr_base = seg.base;
- ctxt.tr_arbytes = seg.attr;
-
- hvm_get_segment_register(v, x86_seg_ldtr, &seg);
- ctxt.ldtr_sel = seg.sel;
- ctxt.ldtr_limit = seg.limit;
- ctxt.ldtr_base = seg.base;
- ctxt.ldtr_arbytes = seg.attr;
-
- if ( v->fpu_initialised )
- {
- memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
- ctxt.flags = XEN_X86_FPU_INITIALISED;
- }
-
- ctxt.rax = v->arch.user_regs.rax;
- ctxt.rbx = v->arch.user_regs.rbx;
- ctxt.rcx = v->arch.user_regs.rcx;
- ctxt.rdx = v->arch.user_regs.rdx;
- ctxt.rbp = v->arch.user_regs.rbp;
- ctxt.rsi = v->arch.user_regs.rsi;
- ctxt.rdi = v->arch.user_regs.rdi;
- ctxt.rsp = v->arch.user_regs.rsp;
- ctxt.rip = v->arch.user_regs.rip;
- ctxt.rflags = v->arch.user_regs.rflags;
- ctxt.r8 = v->arch.user_regs.r8;
- ctxt.r9 = v->arch.user_regs.r9;
- ctxt.r10 = v->arch.user_regs.r10;
- ctxt.r11 = v->arch.user_regs.r11;
- ctxt.r12 = v->arch.user_regs.r12;
- ctxt.r13 = v->arch.user_regs.r13;
- ctxt.r14 = v->arch.user_regs.r14;
- ctxt.r15 = v->arch.user_regs.r15;
- ctxt.dr0 = v->arch.debugreg[0];
- ctxt.dr1 = v->arch.debugreg[1];
- ctxt.dr2 = v->arch.debugreg[2];
- ctxt.dr3 = v->arch.debugreg[3];
- ctxt.dr6 = v->arch.debugreg[6];
- ctxt.dr7 = v->arch.debugreg[7];
-
- if ( hvm_save_entry(CPU, v->vcpu_id, h, &ctxt) != 0 )
- return 1;
+ err = hvm_save_cpu_ctxt_one(v, h);
+ if ( err )
+ break;
}
- return 0;
+ return err;
}
/* Return a string indicating the error, or NULL for valid. */
--
2.7.4
* Re: [PATCH v14 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
2018-07-25 12:14 ` [PATCH v14 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
@ 2018-07-31 12:01 ` Jan Beulich
0 siblings, 0 replies; 33+ messages in thread
From: Jan Beulich @ 2018-07-31 12:01 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
> This is used to save data from a single instance.
>
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
>
> ---
> Changes since V12:
> - Changed memset to {} init.
> ---
> xen/arch/x86/hvm/hvm.c | 214 +++++++++++++++++++++++++------------------------
> 1 file changed, 111 insertions(+), 103 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index d90da9a..720204c 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -787,119 +787,127 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
> HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
> hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
>
> +static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
> +{
> + struct segment_register seg;
> + struct hvm_hw_cpu ctxt = {};
I think you want to move into this initializer anything that can go
there and that does not conflict with ...
> + /* Architecture-specific vmcs/vmcb bits */
> + hvm_funcs.save_cpu_ctxt(v, &ctxt);
... this call. No need to write all these fields twice.
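Roughly like this (only a sketch, assuming the vmcs/vmcb hook doesn't write any
of the fields moved into the initializer; the segment registers still need the
intermediate hvm_get_segment_register() calls and hence stay as assignments):

    struct segment_register seg;
    struct hvm_hw_cpu ctxt = {
        .tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc),
        .msr_tsc_aux = hvm_msr_tsc_aux(v),
        .rax = v->arch.user_regs.rax,
        .rbx = v->arch.user_regs.rbx,
        /* ... the remaining GPRs and debug registers likewise ... */
        .dr7 = v->arch.debugreg[7],
    };

    /* Architecture-specific vmcs/vmcb bits, filled in after the initializer. */
    hvm_funcs.save_cpu_ctxt(v, &ctxt);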
> static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
> {
> struct vcpu *v;
> - struct hvm_hw_cpu ctxt;
> - struct segment_register seg;
> + int err = 0;
>
> for_each_vcpu ( d, v )
> {
> - /* We don't need to save state for a vcpu that is down; the restore
> - * code will leave it down if there is nothing saved. */
> + /*
> + * We don't need to save state for a vcpu that is down; the restore
> + * code will leave it down if there is nothing saved.
> + */
> if ( v->pause_flags & VPF_down )
> continue;
>
> - memset(&ctxt, 0, sizeof(ctxt));
> -
> - /* Architecture-specific vmcs/vmcb bits */
> - hvm_funcs.save_cpu_ctxt(v, &ctxt);
> -
> - ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm_domain.sync_tsc);
> -
> - ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
> -
> - hvm_get_segment_register(v, x86_seg_idtr, &seg);
> - ctxt.idtr_limit = seg.limit;
> - ctxt.idtr_base = seg.base;
> -
> - hvm_get_segment_register(v, x86_seg_gdtr, &seg);
> - ctxt.gdtr_limit = seg.limit;
> - ctxt.gdtr_base = seg.base;
> -
> - hvm_get_segment_register(v, x86_seg_cs, &seg);
> - ctxt.cs_sel = seg.sel;
> - ctxt.cs_limit = seg.limit;
> - ctxt.cs_base = seg.base;
> - ctxt.cs_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_ds, &seg);
> - ctxt.ds_sel = seg.sel;
> - ctxt.ds_limit = seg.limit;
> - ctxt.ds_base = seg.base;
> - ctxt.ds_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_es, &seg);
> - ctxt.es_sel = seg.sel;
> - ctxt.es_limit = seg.limit;
> - ctxt.es_base = seg.base;
> - ctxt.es_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_ss, &seg);
> - ctxt.ss_sel = seg.sel;
> - ctxt.ss_limit = seg.limit;
> - ctxt.ss_base = seg.base;
> - ctxt.ss_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_fs, &seg);
> - ctxt.fs_sel = seg.sel;
> - ctxt.fs_limit = seg.limit;
> - ctxt.fs_base = seg.base;
> - ctxt.fs_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_gs, &seg);
> - ctxt.gs_sel = seg.sel;
> - ctxt.gs_limit = seg.limit;
> - ctxt.gs_base = seg.base;
> - ctxt.gs_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_tr, &seg);
> - ctxt.tr_sel = seg.sel;
> - ctxt.tr_limit = seg.limit;
> - ctxt.tr_base = seg.base;
> - ctxt.tr_arbytes = seg.attr;
> -
> - hvm_get_segment_register(v, x86_seg_ldtr, &seg);
> - ctxt.ldtr_sel = seg.sel;
> - ctxt.ldtr_limit = seg.limit;
> - ctxt.ldtr_base = seg.base;
> - ctxt.ldtr_arbytes = seg.attr;
> -
> - if ( v->fpu_initialised )
> - {
> - memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
> - ctxt.flags = XEN_X86_FPU_INITIALISED;
> - }
> -
> - ctxt.rax = v->arch.user_regs.rax;
> - ctxt.rbx = v->arch.user_regs.rbx;
> - ctxt.rcx = v->arch.user_regs.rcx;
> - ctxt.rdx = v->arch.user_regs.rdx;
> - ctxt.rbp = v->arch.user_regs.rbp;
> - ctxt.rsi = v->arch.user_regs.rsi;
> - ctxt.rdi = v->arch.user_regs.rdi;
> - ctxt.rsp = v->arch.user_regs.rsp;
> - ctxt.rip = v->arch.user_regs.rip;
> - ctxt.rflags = v->arch.user_regs.rflags;
> - ctxt.r8 = v->arch.user_regs.r8;
> - ctxt.r9 = v->arch.user_regs.r9;
> - ctxt.r10 = v->arch.user_regs.r10;
> - ctxt.r11 = v->arch.user_regs.r11;
> - ctxt.r12 = v->arch.user_regs.r12;
> - ctxt.r13 = v->arch.user_regs.r13;
> - ctxt.r14 = v->arch.user_regs.r14;
> - ctxt.r15 = v->arch.user_regs.r15;
> - ctxt.dr0 = v->arch.debugreg[0];
> - ctxt.dr1 = v->arch.debugreg[1];
> - ctxt.dr2 = v->arch.debugreg[2];
> - ctxt.dr3 = v->arch.debugreg[3];
> - ctxt.dr6 = v->arch.debugreg[6];
> - ctxt.dr7 = v->arch.debugreg[7];
> -
> - if ( hvm_save_entry(CPU, v->vcpu_id, h, &ctxt) != 0 )
> - return 1;
> + err = hvm_save_cpu_ctxt_one(v, h);
> + if ( err )
> + break;
> }
> - return 0;
> + return err;
> }
Please take the opportunity and add the missing blank line ahead of
this main return statement of the function.
Jan
* [PATCH v14 04/11] x86/hvm: Introduce hvm_save_cpu_xsave_states_one
2018-07-25 12:14 [PATCH v14 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (2 preceding siblings ...)
2018-07-25 12:14 ` [PATCH v14 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
@ 2018-07-25 12:14 ` Alexandru Isaila
2018-07-31 12:04 ` Jan Beulich
2018-07-25 12:14 ` [PATCH v14 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
` (7 subsequent siblings)
11 siblings, 1 reply; 33+ messages in thread
From: Alexandru Isaila @ 2018-07-25 12:14 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V11:
- hvm_save_cpu_xsave_states_one() returns the err from
_hvm_init_entry().
- hvm_save_cpu_xsave_states() returns err from
hvm_save_cpu_xsave_states_one();
---
xen/arch/x86/hvm/hvm.c | 42 +++++++++++++++++++++++++++---------------
1 file changed, 27 insertions(+), 15 deletions(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 720204c..a6708f5 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1188,33 +1188,45 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
save_area) + \
xstate_ctxt_size(xcr0))
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
struct hvm_hw_cpu_xsave *ctxt;
+ unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+ int err = 0;
if ( !cpu_has_xsave )
return 0; /* do nothing */
+ err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
+ if ( err )
+ return err;
+
+ ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
+ h->cur += size;
+ ctxt->xfeature_mask = xfeature_mask;
+ ctxt->xcr0 = v->arch.xcr0;
+ ctxt->xcr0_accum = v->arch.xcr0_accum;
+
+ expand_xsave_states(v, &ctxt->save_area,
+ size - offsetof(typeof(*ctxt), save_area));
+ return 0;
+}
+
+static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+{
+ struct vcpu *v;
+ int err = 0;
+
for_each_vcpu ( d, v )
{
- unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
-
if ( !xsave_enabled(v) )
continue;
- if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
- return 1;
- ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
- h->cur += size;
-
- ctxt->xfeature_mask = xfeature_mask;
- ctxt->xcr0 = v->arch.xcr0;
- ctxt->xcr0_accum = v->arch.xcr0_accum;
- expand_xsave_states(v, &ctxt->save_area,
- size - offsetof(typeof(*ctxt), save_area));
+ err = hvm_save_cpu_xsave_states_one(v, h);
+ if ( err )
+ break;
}
- return 0;
+ return err;
}
/*
--
2.7.4
* Re: [PATCH v14 04/11] x86/hvm: Introduce hvm_save_cpu_xsave_states_one
2018-07-25 12:14 ` [PATCH v14 04/11] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
@ 2018-07-31 12:04 ` Jan Beulich
0 siblings, 0 replies; 33+ messages in thread
From: Jan Beulich @ 2018-07-31 12:04 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1188,33 +1188,45 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
> save_area) + \
> xstate_ctxt_size(xcr0))
>
> -static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
> +static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
> {
> - struct vcpu *v;
> struct hvm_hw_cpu_xsave *ctxt;
> + unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
> + int err = 0;
Pointless initializer.
> if ( !cpu_has_xsave )
> return 0; /* do nothing */
>
> + err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
> + if ( err )
> + return err;
> +
> + ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
> + h->cur += size;
> + ctxt->xfeature_mask = xfeature_mask;
> + ctxt->xcr0 = v->arch.xcr0;
> + ctxt->xcr0_accum = v->arch.xcr0_accum;
> +
> + expand_xsave_states(v, &ctxt->save_area,
> + size - offsetof(typeof(*ctxt), save_area));
> + return 0;
> +}
Blank line please ahead of main return statement of a function.
Jan
* [PATCH v14 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one func
2018-07-25 12:14 [PATCH v14 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (3 preceding siblings ...)
2018-07-25 12:14 ` [PATCH v14 04/11] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
@ 2018-07-25 12:14 ` Alexandru Isaila
2018-07-31 12:10 ` Jan Beulich
2018-07-25 12:14 ` [PATCH v14 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
` (6 subsequent siblings)
11 siblings, 1 reply; 33+ messages in thread
From: Alexandru Isaila @ 2018-07-25 12:14 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
---
Changes since V11:
- hvm_save_cpu_msrs() returns err from hvm_save_cpu_msrs_one().
---
xen/arch/x86/hvm/hvm.c | 105 +++++++++++++++++++++++++++----------------------
1 file changed, 58 insertions(+), 47 deletions(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index a6708f5..a18b9a6 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1366,69 +1366,80 @@ static const uint32_t msrs_to_send[] = {
};
static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
+ struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
+ struct hvm_msr *ctxt;
+ unsigned int i;
+ int err = 0;
- for_each_vcpu ( d, v )
+ err = _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+ HVM_CPU_MSR_SIZE(msr_count_max));
+ if ( err )
+ return err;
+ ctxt = (struct hvm_msr *)&h->data[h->cur];
+ ctxt->count = 0;
+
+ for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
{
- struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
- struct hvm_msr *ctxt;
- unsigned int i;
+ uint64_t val;
+ int rc = guest_rdmsr(v, msrs_to_send[i], &val);
- if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
- HVM_CPU_MSR_SIZE(msr_count_max)) )
- return 1;
- ctxt = (struct hvm_msr *)&h->data[h->cur];
- ctxt->count = 0;
+ /*
+ * It is the programmers responsibility to ensure that
+ * msrs_to_send[] contain generally-read/write MSRs.
+ * X86EMUL_EXCEPTION here implies a missing feature, and that the
+ * guest doesn't have access to the MSR.
+ */
+ if ( rc == X86EMUL_EXCEPTION )
+ continue;
- for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
+ if ( rc != X86EMUL_OKAY )
{
- uint64_t val;
- int rc = guest_rdmsr(v, msrs_to_send[i], &val);
+ ASSERT_UNREACHABLE();
+ return -ENXIO;
+ }
- /*
- * It is the programmers responsibility to ensure that
- * msrs_to_send[] contain generally-read/write MSRs.
- * X86EMUL_EXCEPTION here implies a missing feature, and that the
- * guest doesn't have access to the MSR.
- */
- if ( rc == X86EMUL_EXCEPTION )
- continue;
+ if ( !val )
+ continue; /* Skip empty MSRs. */
- if ( rc != X86EMUL_OKAY )
- {
- ASSERT_UNREACHABLE();
- return -ENXIO;
- }
+ ctxt->msr[ctxt->count].index = msrs_to_send[i];
+ ctxt->msr[ctxt->count++].val = val;
+ }
- if ( !val )
- continue; /* Skip empty MSRs. */
+ if ( hvm_funcs.save_msr )
+ hvm_funcs.save_msr(v, ctxt);
- ctxt->msr[ctxt->count].index = msrs_to_send[i];
- ctxt->msr[ctxt->count++].val = val;
- }
+ ASSERT(ctxt->count <= msr_count_max);
- if ( hvm_funcs.save_msr )
- hvm_funcs.save_msr(v, ctxt);
+ for ( i = 0; i < ctxt->count; ++i )
+ ctxt->msr[i]._rsvd = 0;
- ASSERT(ctxt->count <= msr_count_max);
+ if ( ctxt->count )
+ {
+ /* Rewrite length to indicate how much space we actually used. */
+ desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
+ h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+ }
+ else
+ /* or rewind and remove the descriptor from the stream. */
+ h->cur -= sizeof(struct hvm_save_descriptor);
+ return 0;
+}
- for ( i = 0; i < ctxt->count; ++i )
- ctxt->msr[i]._rsvd = 0;
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+ struct vcpu *v;
+ int err = 0;
- if ( ctxt->count )
- {
- /* Rewrite length to indicate how much space we actually used. */
- desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
- h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
- }
- else
- /* or rewind and remove the descriptor from the stream. */
- h->cur -= sizeof(struct hvm_save_descriptor);
+ for_each_vcpu ( d, v )
+ {
+ err = hvm_save_cpu_msrs_one(v, h);
+ if ( err )
+ break;
}
- return 0;
+ return err;
}
static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
--
2.7.4
* Re: [PATCH v14 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one func
2018-07-25 12:14 ` [PATCH v14 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
@ 2018-07-31 12:10 ` Jan Beulich
0 siblings, 0 replies; 33+ messages in thread
From: Jan Beulich @ 2018-07-31 12:10 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1366,69 +1366,80 @@ static const uint32_t msrs_to_send[] = {
> };
> static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
>
> -static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
> +static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
> {
> - struct vcpu *v;
> + struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
> + struct hvm_msr *ctxt;
> + unsigned int i;
> + int err = 0;
Pointless initializer again.
> - for_each_vcpu ( d, v )
> + err = _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
> + HVM_CPU_MSR_SIZE(msr_count_max));
> + if ( err )
> + return err;
> + ctxt = (struct hvm_msr *)&h->data[h->cur];
> + ctxt->count = 0;
> +
> + for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
> {
> - struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
> - struct hvm_msr *ctxt;
> - unsigned int i;
> + uint64_t val;
> + int rc = guest_rdmsr(v, msrs_to_send[i], &val);
>
> - if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
> - HVM_CPU_MSR_SIZE(msr_count_max)) )
> - return 1;
> - ctxt = (struct hvm_msr *)&h->data[h->cur];
> - ctxt->count = 0;
> + /*
> + * It is the programmers responsibility to ensure that
> + * msrs_to_send[] contain generally-read/write MSRs.
> + * X86EMUL_EXCEPTION here implies a missing feature, and that the
> + * guest doesn't have access to the MSR.
> + */
> + if ( rc == X86EMUL_EXCEPTION )
> + continue;
>
> - for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
> + if ( rc != X86EMUL_OKAY )
> {
> - uint64_t val;
> - int rc = guest_rdmsr(v, msrs_to_send[i], &val);
> + ASSERT_UNREACHABLE();
> + return -ENXIO;
> + }
>
> - /*
> - * It is the programmers responsibility to ensure that
> - * msrs_to_send[] contain generally-read/write MSRs.
> - * X86EMUL_EXCEPTION here implies a missing feature, and that the
> - * guest doesn't have access to the MSR.
> - */
> - if ( rc == X86EMUL_EXCEPTION )
> - continue;
> + if ( !val )
> + continue; /* Skip empty MSRs. */
>
> - if ( rc != X86EMUL_OKAY )
> - {
> - ASSERT_UNREACHABLE();
> - return -ENXIO;
> - }
> + ctxt->msr[ctxt->count].index = msrs_to_send[i];
> + ctxt->msr[ctxt->count++].val = val;
> + }
>
> - if ( !val )
> - continue; /* Skip empty MSRs. */
> + if ( hvm_funcs.save_msr )
> + hvm_funcs.save_msr(v, ctxt);
>
> - ctxt->msr[ctxt->count].index = msrs_to_send[i];
> - ctxt->msr[ctxt->count++].val = val;
> - }
> + ASSERT(ctxt->count <= msr_count_max);
>
> - if ( hvm_funcs.save_msr )
> - hvm_funcs.save_msr(v, ctxt);
> + for ( i = 0; i < ctxt->count; ++i )
> + ctxt->msr[i]._rsvd = 0;
>
> - ASSERT(ctxt->count <= msr_count_max);
> + if ( ctxt->count )
> + {
> + /* Rewrite length to indicate how much space we actually used. */
> + desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
> + h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
> + }
> + else
> + /* or rewind and remove the descriptor from the stream. */
> + h->cur -= sizeof(struct hvm_save_descriptor);
> + return 0;
> +}
And blank line ahead of the return again please. I'm not going to
repeat these (or other sufficiently generic remarks given earlier),
and I'll assume you'll apply them to the entire series.
Jan
* [PATCH v14 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func
2018-07-25 12:14 [PATCH v14 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (4 preceding siblings ...)
2018-07-25 12:14 ` [PATCH v14 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
@ 2018-07-25 12:14 ` Alexandru Isaila
2018-07-31 12:16 ` Jan Beulich
2018-07-25 12:14 ` [PATCH v14 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
` (5 subsequent siblings)
11 siblings, 1 reply; 33+ messages in thread
From: Alexandru Isaila @ 2018-07-25 12:14 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since v11:
- hvm_save_mtrr_msr() now returns err from hvm_save_mtrr_msr_one().
Note: This patch is based on Roger Pau Monne's series[1]
---
xen/arch/x86/hvm/mtrr.c | 81 +++++++++++++++++++++++++++----------------------
1 file changed, 44 insertions(+), 37 deletions(-)
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 48facbb..47a5c29 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -718,52 +718,59 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
return 0;
}
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
+ const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
+ struct hvm_hw_mtrr hw_mtrr = {
+ .msr_mtrr_def_type = mtrr_state->def_type |
+ MASK_INSR(mtrr_state->fixed_enabled,
+ MTRRdefType_FE) |
+ MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
+ .msr_mtrr_cap = mtrr_state->mtrr_cap,
+ };
+ unsigned int i;
- /* save mtrr&pat */
- for_each_vcpu(d, v)
+ if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
+ (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
{
- const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
- struct hvm_hw_mtrr hw_mtrr = {
- .msr_mtrr_def_type = mtrr_state->def_type |
- MASK_INSR(mtrr_state->fixed_enabled,
- MTRRdefType_FE) |
- MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
- .msr_mtrr_cap = mtrr_state->mtrr_cap,
- };
- unsigned int i;
+ dprintk(XENLOG_G_ERR,
+ "HVM save: %pv: too many (%lu) variable range MTRRs\n",
+ v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
+ return -EINVAL;
+ }
- if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
- (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
- {
- dprintk(XENLOG_G_ERR,
- "HVM save: %pv: too many (%lu) variable range MTRRs\n",
- v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
- return -EINVAL;
- }
+ hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
- hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+ for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
+ {
+ /* save physbase */
+ hw_mtrr.msr_mtrr_var[i*2] =
+ ((uint64_t*)mtrr_state->var_ranges)[i*2];
+ /* save physmask */
+ hw_mtrr.msr_mtrr_var[i*2+1] =
+ ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
+ }
- for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
- {
- /* save physbase */
- hw_mtrr.msr_mtrr_var[i*2] =
- ((uint64_t*)mtrr_state->var_ranges)[i*2];
- /* save physmask */
- hw_mtrr.msr_mtrr_var[i*2+1] =
- ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
- }
+ for ( i = 0; i < NUM_FIXED_MSR; i++ )
+ hw_mtrr.msr_mtrr_fixed[i] =
+ ((uint64_t*)mtrr_state->fixed_ranges)[i];
- for ( i = 0; i < NUM_FIXED_MSR; i++ )
- hw_mtrr.msr_mtrr_fixed[i] =
- ((uint64_t*)mtrr_state->fixed_ranges)[i];
+ return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
+}
- if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
- return 1;
+static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+{
+ struct vcpu *v;
+ int err = 0;
+
+ /* save mtrr&pat */
+ for_each_vcpu(d, v)
+ {
+ err = hvm_save_mtrr_msr_one(v, h);
+ if ( err )
+ break;
}
- return 0;
+ return err;
}
static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
--
2.7.4
* Re: [PATCH v14 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func
2018-07-25 12:14 ` [PATCH v14 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
@ 2018-07-31 12:16 ` Jan Beulich
2018-08-01 14:57 ` Isaila Alexandru
0 siblings, 1 reply; 33+ messages in thread
From: Jan Beulich @ 2018-07-31 12:16 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
> This is used to save data from a single instance.
>
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
>
> ---
> Changes since v11:
> - hvm_save_mtrr_msr() now returns err from hvm_save_mtrr_msr_one().
>
> Note: This patch is based on Roger Pau Monne's series[1]
> ---
> xen/arch/x86/hvm/mtrr.c | 81 +++++++++++++++++++++++++++----------------------
> 1 file changed, 44 insertions(+), 37 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
> index 48facbb..47a5c29 100644
> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -718,52 +718,59 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d,
> uint64_t gfn_start,
> return 0;
> }
>
> -static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
> +static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
> {
> - struct vcpu *v;
> + const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
> + struct hvm_hw_mtrr hw_mtrr = {
> + .msr_mtrr_def_type = mtrr_state->def_type |
> + MASK_INSR(mtrr_state->fixed_enabled,
> + MTRRdefType_FE) |
> + MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
> + .msr_mtrr_cap = mtrr_state->mtrr_cap,
> + };
> + unsigned int i;
>
> - /* save mtrr&pat */
> - for_each_vcpu(d, v)
> + if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
> + (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
> {
> - const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
> - struct hvm_hw_mtrr hw_mtrr = {
> - .msr_mtrr_def_type = mtrr_state->def_type |
> - MASK_INSR(mtrr_state->fixed_enabled,
> - MTRRdefType_FE) |
> - MASK_INSR(mtrr_state->enabled,
> MTRRdefType_E),
> - .msr_mtrr_cap = mtrr_state->mtrr_cap,
> - };
> - unsigned int i;
> + dprintk(XENLOG_G_ERR,
> + "HVM save: %pv: too many (%lu) variable range MTRRs\n",
> + v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
> + return -EINVAL;
> + }
>
> - if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
> - (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
> - {
> - dprintk(XENLOG_G_ERR,
> - "HVM save: %pv: too many (%lu) variable range MTRRs\n",
> - v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
> - return -EINVAL;
> - }
> + hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
>
> - hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
> + for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
> + {
> + /* save physbase */
> + hw_mtrr.msr_mtrr_var[i*2] =
> + ((uint64_t*)mtrr_state->var_ranges)[i*2];
> + /* save physmask */
> + hw_mtrr.msr_mtrr_var[i*2+1] =
> + ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
> + }
As you move/re-indent code, please fix obvious style violations
(here: missing blanks around binary operators). It also would be
really nice if you got rid of the ugly and risky casts, and used
the actual structure fields instead.
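I.e. something like (a sketch only; this assumes the var_ranges[] elements
expose the physbase/physmask pair as .base/.mask - use whatever the actual
member names are):

    for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
    {
        /* save physbase */
        hw_mtrr.msr_mtrr_var[i * 2] = mtrr_state->var_ranges[i].base;
        /* save physmask */
        hw_mtrr.msr_mtrr_var[i * 2 + 1] = mtrr_state->var_ranges[i].mask;
    }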
> - for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
> - {
> - /* save physbase */
> - hw_mtrr.msr_mtrr_var[i*2] =
> - ((uint64_t*)mtrr_state->var_ranges)[i*2];
> - /* save physmask */
> - hw_mtrr.msr_mtrr_var[i*2+1] =
> - ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
> - }
> + for ( i = 0; i < NUM_FIXED_MSR; i++ )
> + hw_mtrr.msr_mtrr_fixed[i] =
> + ((uint64_t*)mtrr_state->fixed_ranges)[i];
Whereas here I think you would best simply use memcpy(), again
to get rid of the cast.
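I.e. (sketch; this assumes the two arrays have identical layout - both should
cover the 11 fixed-range MSRs, i.e. 88 bytes - which a BUILD_BUG_ON() can
assert):

    BUILD_BUG_ON(sizeof(hw_mtrr.msr_mtrr_fixed) !=
                 sizeof(mtrr_state->fixed_ranges));

    memcpy(hw_mtrr.msr_mtrr_fixed, mtrr_state->fixed_ranges,
           sizeof(hw_mtrr.msr_mtrr_fixed));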
Jan
* Re: [PATCH v14 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func
2018-07-31 12:16 ` Jan Beulich
@ 2018-08-01 14:57 ` Isaila Alexandru
2018-08-01 15:03 ` Jan Beulich
0 siblings, 1 reply; 33+ messages in thread
From: Isaila Alexandru @ 2018-08-01 14:57 UTC (permalink / raw)
To: Jan Beulich; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
On Tue, 2018-07-31 at 06:16 -0600, Jan Beulich wrote:
> > > > On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
> > This is used to save data from a single instance.
> >
> > Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> >
> > ---
> > Changes since v11:
> > - hvm_save_mtrr_msr() now returns err from
> > hvm_save_mtrr_msr_one().
> >
> > Note: This patch is based on Roger Pau Monne's series[1]
> > ---
> > xen/arch/x86/hvm/mtrr.c | 81 +++++++++++++++++++++++++++--------
> > --------------
> > 1 file changed, 44 insertions(+), 37 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
> > index 48facbb..47a5c29 100644
> > --- a/xen/arch/x86/hvm/mtrr.c
> > +++ b/xen/arch/x86/hvm/mtrr.c
> > @@ -718,52 +718,59 @@ int hvm_set_mem_pinned_cacheattr(struct
> > domain *d,
> > uint64_t gfn_start,
> > return 0;
> > }
> >
> > -static int hvm_save_mtrr_msr(struct domain *d,
> > hvm_domain_context_t *h)
> > +static int hvm_save_mtrr_msr_one(struct vcpu *v,
> > hvm_domain_context_t *h)
> > {
> > - struct vcpu *v;
> > + const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
> > + struct hvm_hw_mtrr hw_mtrr = {
> > + .msr_mtrr_def_type = mtrr_state->def_type |
> > + MASK_INSR(mtrr_state->fixed_enabled,
> > + MTRRdefType_FE) |
> > + MASK_INSR(mtrr_state->enabled,
> > MTRRdefType_E),
> > + .msr_mtrr_cap = mtrr_state->mtrr_cap,
> > + };
> > + unsigned int i;
> >
> > - /* save mtrr&pat */
> > - for_each_vcpu(d, v)
> > + if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
> > + (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
> > {
> > - const struct mtrr_state *mtrr_state = &v-
> > >arch.hvm_vcpu.mtrr;
> > - struct hvm_hw_mtrr hw_mtrr = {
> > - .msr_mtrr_def_type = mtrr_state->def_type |
> > - MASK_INSR(mtrr_state-
> > >fixed_enabled,
> > - MTRRdefType_FE) |
> > - MASK_INSR(mtrr_state->enabled,
> > MTRRdefType_E),
> > - .msr_mtrr_cap = mtrr_state->mtrr_cap,
> > - };
> > - unsigned int i;
> > + dprintk(XENLOG_G_ERR,
> > + "HVM save: %pv: too many (%lu) variable range
> > MTRRs\n",
> > + v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
> > + return -EINVAL;
> > + }
> >
> > - if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
> > - (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
> > - {
> > - dprintk(XENLOG_G_ERR,
> > - "HVM save: %pv: too many (%lu) variable range
> > MTRRs\n",
> > - v, MASK_EXTR(hw_mtrr.msr_mtrr_cap,
> > MTRRcap_VCNT));
> > - return -EINVAL;
> > - }
> > + hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
> >
> > - hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
> > + for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap,
> > MTRRcap_VCNT); i++ )
> > + {
> > + /* save physbase */
> > + hw_mtrr.msr_mtrr_var[i*2] =
> > + ((uint64_t*)mtrr_state->var_ranges)[i*2];
> > + /* save physmask */
> > + hw_mtrr.msr_mtrr_var[i*2+1] =
> > + ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
> > + }
> As you move/re-indent code, please fix obvious style violations
> (here: missing blanks around binary operators). It also would be
> really nice if you got rid of the ugly and risky casts, and used
> the actual structure fields instead.
Couldn't we just use a
memcpy(hw_mtrr.msr_mtrr_var, mtrr_state->var_ranges,
MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT)) ?
Same as you suggested...
>
> >
> > - for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap,
> > MTRRcap_VCNT); i++ )
> > - {
> > - /* save physbase */
> > - hw_mtrr.msr_mtrr_var[i*2] =
> > - ((uint64_t*)mtrr_state->var_ranges)[i*2];
> > - /* save physmask */
> > - hw_mtrr.msr_mtrr_var[i*2+1] =
> > - ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
> > - }
> > + for ( i = 0; i < NUM_FIXED_MSR; i++ )
> > + hw_mtrr.msr_mtrr_fixed[i] =
> > + ((uint64_t*)mtrr_state->fixed_ranges)[i];
> Whereas here I think you would best simply use memcpy(), again
> to get rid of the cast.
>
... here
Thanks,
Alex
* Re: [PATCH v14 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func
2018-08-01 14:57 ` Isaila Alexandru
@ 2018-08-01 15:03 ` Jan Beulich
0 siblings, 0 replies; 33+ messages in thread
From: Jan Beulich @ 2018-08-01 15:03 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 01.08.18 at 16:57, <aisaila@bitdefender.com> wrote:
> On Tue, 2018-07-31 at 06:16 -0600, Jan Beulich wrote:
>> > > > On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
>> > --- a/xen/arch/x86/hvm/mtrr.c
>> > +++ b/xen/arch/x86/hvm/mtrr.c
>> > @@ -718,52 +718,59 @@ int hvm_set_mem_pinned_cacheattr(struct
>> > domain *d,
>> > uint64_t gfn_start,
>> > return 0;
>> > }
>> >
>> > -static int hvm_save_mtrr_msr(struct domain *d,
>> > hvm_domain_context_t *h)
>> > +static int hvm_save_mtrr_msr_one(struct vcpu *v,
>> > hvm_domain_context_t *h)
>> > {
>> > - struct vcpu *v;
>> > + const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
>> > + struct hvm_hw_mtrr hw_mtrr = {
>> > + .msr_mtrr_def_type = mtrr_state->def_type |
>> > + MASK_INSR(mtrr_state->fixed_enabled,
>> > + MTRRdefType_FE) |
>> > + MASK_INSR(mtrr_state->enabled,
>> > MTRRdefType_E),
>> > + .msr_mtrr_cap = mtrr_state->mtrr_cap,
>> > + };
>> > + unsigned int i;
>> >
>> > - /* save mtrr&pat */
>> > - for_each_vcpu(d, v)
>> > + if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
>> > + (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
>> > {
>> > - const struct mtrr_state *mtrr_state = &v-
>> > >arch.hvm_vcpu.mtrr;
>> > - struct hvm_hw_mtrr hw_mtrr = {
>> > - .msr_mtrr_def_type = mtrr_state->def_type |
>> > - MASK_INSR(mtrr_state-
>> > >fixed_enabled,
>> > - MTRRdefType_FE) |
>> > - MASK_INSR(mtrr_state->enabled,
>> > MTRRdefType_E),
>> > - .msr_mtrr_cap = mtrr_state->mtrr_cap,
>> > - };
>> > - unsigned int i;
>> > + dprintk(XENLOG_G_ERR,
>> > + "HVM save: %pv: too many (%lu) variable range
>> > MTRRs\n",
>> > + v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
>> > + return -EINVAL;
>> > + }
>> >
>> > - if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
>> > - (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
>> > - {
>> > - dprintk(XENLOG_G_ERR,
>> > - "HVM save: %pv: too many (%lu) variable range
>> > MTRRs\n",
>> > - v, MASK_EXTR(hw_mtrr.msr_mtrr_cap,
>> > MTRRcap_VCNT));
>> > - return -EINVAL;
>> > - }
>> > + hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
>> >
>> > - hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
>> > + for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap,
>> > MTRRcap_VCNT); i++ )
>> > + {
>> > + /* save physbase */
>> > + hw_mtrr.msr_mtrr_var[i*2] =
>> > + ((uint64_t*)mtrr_state->var_ranges)[i*2];
>> > + /* save physmask */
>> > + hw_mtrr.msr_mtrr_var[i*2+1] =
>> > + ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
>> > + }
>> As you move/re-indent code, please fix obvious style violations
>> (here: missing blanks around binary operators). It also would be
>> really nice if you got rid of the ugly and risky casts, and used
>> the actual structure fields instead.
>
> Couldn't we just use a
> memcpy(hw_mtrr.msr_mtrr_var, mtrr_state->var_ranges,
> MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT)) ?
Well, strictly speaking we could, but I'd prefer not to here since
the alternative is going to be readable, other than ...
> Same as you suggested...
>>
>> >
>> > - for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap,
>> > MTRRcap_VCNT); i++ )
>> > - {
>> > - /* save physbase */
>> > - hw_mtrr.msr_mtrr_var[i*2] =
>> > - ((uint64_t*)mtrr_state->var_ranges)[i*2];
>> > - /* save physmask */
>> > - hw_mtrr.msr_mtrr_var[i*2+1] =
>> > - ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
>> > - }
>> > + for ( i = 0; i < NUM_FIXED_MSR; i++ )
>> > + hw_mtrr.msr_mtrr_fixed[i] =
>> > + ((uint64_t*)mtrr_state->fixed_ranges)[i];
>> Whereas here I think you would best simply use memcpy(), again
>> to get rid of the cast.
>>
> ... here
... here.
Jan
* [PATCH v14 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func
2018-07-25 12:14 [PATCH v14 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (5 preceding siblings ...)
2018-07-25 12:14 ` [PATCH v14 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
@ 2018-07-25 12:14 ` Alexandru Isaila
2018-07-31 12:17 ` Jan Beulich
2018-07-25 12:14 ` [PATCH v14 08/11] x86/hvm: Add handler for save_one funcs Alexandru Isaila
` (4 subsequent siblings)
11 siblings, 1 reply; 33+ messages in thread
From: Alexandru Isaila @ 2018-07-25 12:14 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This is used to save data from a single instance.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
---
Changes since V13:
- Fixed style.
---
xen/arch/x86/hvm/viridian.c | 30 +++++++++++++++++++-----------
1 file changed, 19 insertions(+), 11 deletions(-)
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 694eae6..5e3b8ac 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1026,24 +1026,32 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
+ struct hvm_viridian_vcpu_context ctxt = {};
- if ( !is_viridian_domain(d) )
+ if ( !is_viridian_domain(v->domain) )
return 0;
- for_each_vcpu( d, v ) {
- struct hvm_viridian_vcpu_context ctxt = {
- .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
- .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
- };
+ ctxt.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw;
+ ctxt.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending;
+
+ return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
+}
+
+static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+ struct vcpu *v;
+ int err = 0;
- if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
- return 1;
+ for_each_vcpu ( d, v )
+ {
+ err = viridian_save_vcpu_ctxt_one(v, h);
+ if ( err )
+ break;
}
- return 0;
+ return err;
}
static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
--
2.7.4
* Re: [PATCH v14 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func
2018-07-25 12:14 ` [PATCH v14 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-07-31 12:17 ` Jan Beulich
0 siblings, 0 replies; 33+ messages in thread
From: Jan Beulich @ 2018-07-31 12:17 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
> --- a/xen/arch/x86/hvm/viridian.c
> +++ b/xen/arch/x86/hvm/viridian.c
> @@ -1026,24 +1026,32 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
> HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
> viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
>
> -static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
> +static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
> {
> - struct vcpu *v;
> + struct hvm_viridian_vcpu_context ctxt = {};
>
> - if ( !is_viridian_domain(d) )
> + if ( !is_viridian_domain(v->domain) )
> return 0;
Despite this conditional I think it wouldn't be overly expensive to
the non-Viridian case to move ...
> - for_each_vcpu( d, v ) {
> - struct hvm_viridian_vcpu_context ctxt = {
> - .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
> - .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
> - };
> + ctxt.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw;
> + ctxt.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending;
... these back into ctxt's initializer.
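I.e. (sketch only - for non-Viridian guests this merely does two extra reads
before bailing out):

static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
{
    struct hvm_viridian_vcpu_context ctxt = {
        .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
        .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
    };

    if ( !is_viridian_domain(v->domain) )
        return 0;

    return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
}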
Jan
* [PATCH v14 08/11] x86/hvm: Add handler for save_one funcs
2018-07-25 12:14 [PATCH v14 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (6 preceding siblings ...)
2018-07-25 12:14 ` [PATCH v14 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-07-25 12:14 ` Alexandru Isaila
2018-07-31 12:34 ` Jan Beulich
2018-07-25 12:14 ` [PATCH v14 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
` (3 subsequent siblings)
11 siblings, 1 reply; 33+ messages in thread
From: Alexandru Isaila @ 2018-07-25 12:14 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V8:
- Add comment for the handler return values.
---
xen/arch/x86/cpu/mcheck/vmce.c | 1 +
xen/arch/x86/hvm/hpet.c | 2 +-
xen/arch/x86/hvm/hvm.c | 6 +++++-
xen/arch/x86/hvm/i8254.c | 2 +-
xen/arch/x86/hvm/irq.c | 6 +++---
xen/arch/x86/hvm/mtrr.c | 4 ++--
xen/arch/x86/hvm/pmtimer.c | 2 +-
xen/arch/x86/hvm/rtc.c | 2 +-
xen/arch/x86/hvm/save.c | 5 ++++-
xen/arch/x86/hvm/vioapic.c | 2 +-
xen/arch/x86/hvm/viridian.c | 3 ++-
xen/arch/x86/hvm/vlapic.c | 4 ++--
xen/arch/x86/hvm/vpic.c | 2 +-
xen/include/asm-x86/hvm/save.h | 6 +++++-
14 files changed, 30 insertions(+), 17 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 31e553c..35044d7 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -396,6 +396,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
+ vmce_save_vcpu_ctxt_one,
vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
/*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 2837709..aff8613 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -640,7 +640,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
static void hpet_set(HPETState *h)
{
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index a18b9a6..1246ed5 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -785,6 +785,7 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
+ hvm_save_tsc_adjust_one,
hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1181,7 +1182,8 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+ hvm_load_cpu_ctxt,
1, HVMSR_PER_VCPU);
#define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
@@ -1534,6 +1536,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
hvm_register_savevm(CPU_XSAVE_CODE,
"CPU_XSAVE",
hvm_save_cpu_xsave_states,
+ hvm_save_cpu_xsave_states_one,
hvm_load_cpu_xsave_states,
HVM_CPU_XSAVE_SIZE(xfeature_mask) +
sizeof(struct hvm_save_descriptor),
@@ -1546,6 +1549,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
hvm_register_savevm(CPU_MSR_CODE,
"CPU_MSR",
hvm_save_cpu_msrs,
+ hvm_save_cpu_msrs_one,
hvm_load_cpu_msrs,
HVM_CPU_MSR_SIZE(msr_count_max) +
sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index 992f08d..ec77b23 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -437,7 +437,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
void pit_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index c85d004..770eab7 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -764,9 +764,9 @@ static int irq_load_link(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa,
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 47a5c29..1cb2a2e 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -823,8 +823,8 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_load_mtrr_msr,
- 1, HVMSR_PER_VCPU);
+HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_save_mtrr_msr_one,
+ hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
void memory_type_changed(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 435647f..0a5e8ce 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -309,7 +309,7 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, acpi_load,
+HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, NULL, acpi_load,
1, HVMSR_PER_DOM);
int pmtimer_change_ioport(struct domain *d, unsigned int version)
diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index cb75b99..ce7e71b 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -783,7 +783,7 @@ static int rtc_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, rtc_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, NULL, rtc_load, 1, HVMSR_PER_DOM);
void rtc_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 8984a23..b674937 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -85,16 +85,18 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
/* List of handlers for various HVM save and restore types */
static struct {
hvm_save_handler save;
+ hvm_save_one_handler save_one;
hvm_load_handler load;
const char *name;
size_t size;
int kind;
-} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, "<?>"}, };
+} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, NULL, "<?>"}, };
/* Init-time function to add entries to that list */
void __init hvm_register_savevm(uint16_t typecode,
const char *name,
hvm_save_handler save_state,
+ hvm_save_one_handler save_one,
hvm_load_handler load_state,
size_t size, int kind)
{
@@ -102,6 +104,7 @@ void __init hvm_register_savevm(uint16_t typecode,
ASSERT(hvm_sr_handlers[typecode].save == NULL);
ASSERT(hvm_sr_handlers[typecode].load == NULL);
hvm_sr_handlers[typecode].save = save_state;
+ hvm_sr_handlers[typecode].save_one = save_one;
hvm_sr_handlers[typecode].load = load_state;
hvm_sr_handlers[typecode].name = name;
hvm_sr_handlers[typecode].size = size;
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 97b419f..66f54e4 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -601,7 +601,7 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
return hvm_load_entry(IOAPIC, h, &s->domU);
}
-HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, ioapic_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, NULL, ioapic_load, 1, HVMSR_PER_DOM);
void vioapic_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 5e3b8ac..f72c8a9 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1023,7 +1023,7 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
+HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt, NULL,
viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1085,6 +1085,7 @@ static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_VCPU, viridian_save_vcpu_ctxt,
+ viridian_save_vcpu_ctxt_one,
viridian_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
static int __init parse_viridian_version(const char *arg)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 1b9f00a..eff6070 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1576,9 +1576,9 @@ static int lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_load_hidden,
+HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, NULL, lapic_load_hidden,
1, HVMSR_PER_VCPU);
-HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_load_regs,
+HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, NULL, lapic_load_regs,
1, HVMSR_PER_VCPU);
int vlapic_init(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index e160bbd..ca9b4cb 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -411,7 +411,7 @@ static int vpic_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, vpic_load, 2, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, NULL, vpic_load, 2, HVMSR_PER_DOM);
void vpic_reset(struct domain *d)
{
diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-x86/hvm/save.h
index f889e8f..2538628 100644
--- a/xen/include/asm-x86/hvm/save.h
+++ b/xen/include/asm-x86/hvm/save.h
@@ -97,6 +97,8 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
* restoring. Both return non-zero on error. */
typedef int (*hvm_save_handler) (struct domain *d,
hvm_domain_context_t *h);
+typedef int (*hvm_save_one_handler)(struct vcpu *v,
+ hvm_domain_context_t *h);
typedef int (*hvm_load_handler) (struct domain *d,
hvm_domain_context_t *h);
@@ -105,6 +107,7 @@ typedef int (*hvm_load_handler) (struct domain *d,
void hvm_register_savevm(uint16_t typecode,
const char *name,
hvm_save_handler save_state,
+ hvm_save_one_handler save_one,
hvm_load_handler load_state,
size_t size, int kind);
@@ -114,12 +117,13 @@ void hvm_register_savevm(uint16_t typecode,
/* Syntactic sugar around that function: specify the max number of
* saves, and this calculates the size of buffer needed */
-#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k) \
+#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load, _num, _k) \
static int __init __hvm_register_##_x##_save_and_restore(void) \
{ \
hvm_register_savevm(HVM_SAVE_CODE(_x), \
#_x, \
&_save, \
+ _save_one, \
&_load, \
(_num) * (HVM_SAVE_LENGTH(_x) \
+ sizeof (struct hvm_save_descriptor)), \
--
2.7.4
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply related [flat|nested] 33+ messages in thread
* Re: [PATCH v14 08/11] x86/hvm: Add handler for save_one funcs
2018-07-25 12:14 ` [PATCH v14 08/11] x86/hvm: Add handler for save_one funcs Alexandru Isaila
@ 2018-07-31 12:34 ` Jan Beulich
2018-07-31 12:55 ` Isaila Alexandru
0 siblings, 1 reply; 33+ messages in thread
From: Jan Beulich @ 2018-07-31 12:34 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
> --- a/xen/arch/x86/hvm/save.c
> +++ b/xen/arch/x86/hvm/save.c
> @@ -85,16 +85,18 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
> /* List of handlers for various HVM save and restore types */
> static struct {
> hvm_save_handler save;
> + hvm_save_one_handler save_one;
> hvm_load_handler load;
> const char *name;
> size_t size;
> int kind;
> -} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, "<?>"}, };
> +} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, NULL, "<?>"}, };
This initializer looks flawed, and I'd appreciate if you could fix it as
you have to touch it anyway: It only sets .name of array entry 0
to a non-NULL string. Either this setting is not needed in the first
place (as looks to be the case), or you'll want to initialize all array
entries the same. Use the C99 designated initializer syntax (a GNU
extension in C89) for that purpose. Perhaps simply dropping the
initializer could be a prereq patch which could go in right away.
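A minimal sketch of initializing all entries the same way (using the
GNU range-designator extension; shown only for illustration, the exact
form is left to the patch author):

    static struct {
        hvm_save_handler save;
        hvm_save_one_handler save_one;
        hvm_load_handler load;
        const char *name;
        size_t size;
        int kind;
    } hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = {
        [0 ... HVM_SAVE_CODE_MAX] = { .name = "<?>" },
    };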
> --- a/xen/arch/x86/hvm/vlapic.c
> +++ b/xen/arch/x86/hvm/vlapic.c
> @@ -1576,9 +1576,9 @@ static int lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
> return 0;
> }
>
> -HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_load_hidden,
> +HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, NULL, lapic_load_hidden,
> 1, HVMSR_PER_VCPU);
> -HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_load_regs,
> +HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, NULL, lapic_load_regs,
These are per-vCPU as well - why do they get NULL inserted here,
rather than there being another (two) prereq patch(es)?
> --- a/xen/include/asm-x86/hvm/save.h
> +++ b/xen/include/asm-x86/hvm/save.h
> @@ -97,6 +97,8 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
> * restoring. Both return non-zero on error. */
> typedef int (*hvm_save_handler) (struct domain *d,
> hvm_domain_context_t *h);
> +typedef int (*hvm_save_one_handler)(struct vcpu *v,
> + hvm_domain_context_t *h);
I don't think "one" is a valid part of the name here: PIC has
multiple instances too (and eventually IO-APIC will), yet they're
not to be managed this way. I think you want to use "vcpu"
instead.
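I.e. presumably something along the lines of (the name being only a
suggestion):

    typedef int (*hvm_save_vcpu_handler)(struct vcpu *v,
                                         hvm_domain_context_t *h);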
> @@ -114,12 +117,13 @@ void hvm_register_savevm(uint16_t typecode,
>
> /* Syntactic sugar around that function: specify the max number of
> * saves, and this calculates the size of buffer needed */
> -#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k) \
> +#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load, _num, _k) \
> static int __init __hvm_register_##_x##_save_and_restore(void) \
> { \
> hvm_register_savevm(HVM_SAVE_CODE(_x), \
> #_x, \
> &_save, \
> + _save_one, \
While I generally appreciate the omission of the &, I'd
prefer if you added it for consistency with the neighboring
lines.
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v14 08/11] x86/hvm: Add handler for save_one funcs
2018-07-31 12:34 ` Jan Beulich
@ 2018-07-31 12:55 ` Isaila Alexandru
2018-07-31 13:32 ` Jan Beulich
0 siblings, 1 reply; 33+ messages in thread
From: Isaila Alexandru @ 2018-07-31 12:55 UTC (permalink / raw)
To: Jan Beulich; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
On Ma, 2018-07-31 at 06:34 -0600, Jan Beulich wrote:
> >
> > >
> > > >
> > > > On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
> > --- a/xen/arch/x86/hvm/save.c
> > +++ b/xen/arch/x86/hvm/save.c
> > @@ -85,16 +85,18 @@ int arch_hvm_load(struct domain *d, struct
> > hvm_save_header *hdr)
> > /* List of handlers for various HVM save and restore types */
> > static struct {
> > hvm_save_handler save;
> > + hvm_save_one_handler save_one;
> > hvm_load_handler load;
> > const char *name;
> > size_t size;
> > int kind;
> > -} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, "<?>"},
> > };
> > +} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, NULL,
> > "<?>"}, };
> This initializer looks flawed, and I'd appreciate if you could fix it
> as
> you have to touch it anyway: It only sets .name of array entry 0
> to a non-NULL string. Either this setting is not needed in the first
> place (as looks to be the case), or you'll want to initialize all
> array
> entries the same. Use the C99 (GNU extension in C89) for that
> purpose. Perhaps simply dropping the initializer could be prereq
> patch which could go in right away.
>
> >
> > --- a/xen/arch/x86/hvm/vlapic.c
> > +++ b/xen/arch/x86/hvm/vlapic.c
> > @@ -1576,9 +1576,9 @@ static int lapic_load_regs(struct domain *d,
> > hvm_domain_context_t *h)
> > return 0;
> > }
> >
> > -HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden,
> > lapic_load_hidden,
> > +HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, NULL,
> > lapic_load_hidden,
> > 1, HVMSR_PER_VCPU);
> > -HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs,
> > lapic_load_regs,
> > +HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, NULL,
> > lapic_load_regs,
> These are per-vCPU as well - why do they get NULL inserted here,
> rather than there being another (two) prereq patch(es)?
Both LAPIC save functions have a for_each_vcpu loop, so they look like
save_one functions already; no need to do anything here.
>
> >
> > --- a/xen/include/asm-x86/hvm/save.h
> > +++ b/xen/include/asm-x86/hvm/save.h
> > @@ -97,6 +97,8 @@ static inline uint16_t hvm_load_instance(struct
> > hvm_domain_context *h)
> > * restoring. Both return non-zero on error. */
> > typedef int (*hvm_save_handler) (struct domain *d,
> > hvm_domain_context_t *h);
> > +typedef int (*hvm_save_one_handler)(struct vcpu *v,
> > + hvm_domain_context_t *h);
> I don't think "one" is a valid part of the name here: PIC has
> multiple instances too (and eventually IO-APIC will), yet they're
> not to be managed this way. I think you want to use "vcpu"
> instead.
>
> >
> > @@ -114,12 +117,13 @@ void hvm_register_savevm(uint16_t typecode,
> >
> > /* Syntactic sugar around that function: specify the max number of
> > * saves, and this calculates the size of buffer needed */
> > -#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num,
> > _k) \
> > +#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load,
> > _num, _k) \
> > static int __init
> > __hvm_register_##_x##_save_and_restore(void) \
> > {
> > \
> > hvm_register_savevm(HVM_SAVE_CODE(_x),
> > \
> > #_x,
> > \
> > &_save,
> > \
> > + _save_one,
> > \
> While I generally appreciate the omission of the &, I'd
> prefer if you added it for consistency with the neighboring
> lines.
This was done so we can add NULL in the places that do not have
save_one functions.
Thanks,
Alex
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v14 08/11] x86/hvm: Add handler for save_one funcs
2018-07-31 12:55 ` Isaila Alexandru
@ 2018-07-31 13:32 ` Jan Beulich
2018-07-31 13:45 ` Isaila Alexandru
0 siblings, 1 reply; 33+ messages in thread
From: Jan Beulich @ 2018-07-31 13:32 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 31.07.18 at 14:55, <aisaila@bitdefender.com> wrote:
> On Ma, 2018-07-31 at 06:34 -0600, Jan Beulich wrote:
>> > > > On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
>> > --- a/xen/arch/x86/hvm/vlapic.c
>> > +++ b/xen/arch/x86/hvm/vlapic.c
>> > @@ -1576,9 +1576,9 @@ static int lapic_load_regs(struct domain *d,
>> > hvm_domain_context_t *h)
>> > return 0;
>> > }
>> >
>> > -HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden,
>> > lapic_load_hidden,
>> > +HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, NULL,
>> > lapic_load_hidden,
>> > 1, HVMSR_PER_VCPU);
>> > -HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs,
>> > lapic_load_regs,
>> > +HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, NULL,
>> > lapic_load_regs,
>> These are per-vCPU as well - why do they get NULL inserted here,
>> rather than there being another (two) prereq patch(es)?
>
> Both LAPIC save functions have a for_each_vcpu loop, so they look like
> save_one functions already; no need to do anything here.
Quite the opposite - presence of a loop over all vCPU-s clearly
says they're not save-one functions. Otherwise you wouldn't
have found the need to touch the functions the way you do in
patch 10.
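For comparison, the per-vCPU shape those two functions take in patch 10
(condensed from the hunk later in this thread) is roughly:

    static int lapic_save_hidden(struct vcpu *v, hvm_domain_context_t *h)
    {
        struct vlapic *s = vcpu_vlapic(v);

        if ( !has_vlapic(v->domain) )
            return 0;

        return hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw);
    }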
>> > @@ -114,12 +117,13 @@ void hvm_register_savevm(uint16_t typecode,
>> >
>> > /* Syntactic sugar around that function: specify the max number of
>> > * saves, and this calculates the size of buffer needed */
>> > -#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num,
>> > _k) \
>> > +#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load,
>> > _num, _k) \
>> > static int __init
>> > __hvm_register_##_x##_save_and_restore(void) \
>> > {
>> > \
>> > hvm_register_savevm(HVM_SAVE_CODE(_x),
>> > \
>> > #_x,
>> > \
>> > &_save,
>> > \
>> > + _save_one,
>> > \
>> While I generally appreciate the omission of the &, I'd
>> prefer if you added it for consistency with the neighboring
>> lines.
>
> This was done so we can add NULL in the places that do not have
> save_one functions.
??? (I cannot connect your response to my remark.)
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v14 08/11] x86/hvm: Add handler for save_one funcs
2018-07-31 13:32 ` Jan Beulich
@ 2018-07-31 13:45 ` Isaila Alexandru
2018-07-31 14:22 ` Jan Beulich
0 siblings, 1 reply; 33+ messages in thread
From: Isaila Alexandru @ 2018-07-31 13:45 UTC (permalink / raw)
To: Jan Beulich; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
On Ma, 2018-07-31 at 07:32 -0600, Jan Beulich wrote:
> >
> > >
> > > >
> > > > On 31.07.18 at 14:55, <aisaila@bitdefender.com> wrote:
> > On Ma, 2018-07-31 at 06:34 -0600, Jan Beulich wrote:
> > >
> > > >
> > > > >
> > > > > >
> > > > > > On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
> > > > --- a/xen/arch/x86/hvm/vlapic.c
> > > > +++ b/xen/arch/x86/hvm/vlapic.c
> > > > @@ -1576,9 +1576,9 @@ static int lapic_load_regs(struct domain
> > > > *d,
> > > > hvm_domain_context_t *h)
> > > > return 0;
> > > > }
> > > >
> > > > -HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden,
> > > > lapic_load_hidden,
> > > > +HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, NULL,
> > > > lapic_load_hidden,
> > > > 1, HVMSR_PER_VCPU);
> > > > -HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs,
> > > > lapic_load_regs,
> > > > +HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, NULL,
> > > > lapic_load_regs,
> > > These are per-vCPU as well - why do they get NULL inserted here,
> > > rather than there being another (two) prereq patch(es)?
> > Both LAPIC save functions have a for_each_vcpu loop, so they look like
> > save_one functions already; no need to do anything here.
> Quite the opposite - presence of a loop over all vCPU-s clearly
> says they're not save-one functions. Otherwise you wouldn't
> have found the need to touch the functions the way you do in
> patch 10.
>
> >
> > >
> > > >
> > > > @@ -114,12 +117,13 @@ void hvm_register_savevm(uint16_t
> > > > typecode,
> > > >
> > > > /* Syntactic sugar around that function: specify the max
> > > > number of
> > > > * saves, and this calculates the size of buffer needed */
> > > > -#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num,
> > > > _k) \
> > > > +#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load,
> > > > _num, _k) \
> > > > static int __init
> > > > __hvm_register_##_x##_save_and_restore(void) \
> > > > {
> > > >
> > > > \
> > > > hvm_register_savevm(HVM_SAVE_CODE(_x),
> > > >
> > > > \
> > > > #_x,
> > > >
> > > > \
> > > > &_save,
> > > >
> > > > \
> > > > + _save_one,
> > > >
> > > > \
> > > While I generally appreciate the omission of the &, I'd
> > > prefer if you added it for consistency with the neighboring
> > > lines.
> > This was done so we can add NULL in the places that do not have
> > save_one functions.
> ??? (I cannot connect your response to my remark.)
>
If it were &_save_one, it would not compile when any registration
passes NULL for the save_one argument:
hpet.c: In function ‘__hvm_register_HPET_save_and_restore’:
/home/aisaila/work/xen/xen/include/asm/hvm/save.h:126:25: error: lvalue
required as unary ‘&’ operand
&_save_one,
\
^
hpet.c:643:1: note: in expansion of macro ‘HVM_REGISTER_SAVE_RESTORE’
HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1,
HVMSR_PER_DOM);
Alex
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v14 08/11] x86/hvm: Add handler for save_one funcs
2018-07-31 13:45 ` Isaila Alexandru
@ 2018-07-31 14:22 ` Jan Beulich
0 siblings, 0 replies; 33+ messages in thread
From: Jan Beulich @ 2018-07-31 14:22 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 31.07.18 at 15:45, <aisaila@bitdefender.com> wrote:
> On Ma, 2018-07-31 at 07:32 -0600, Jan Beulich wrote:
>> > > > On 31.07.18 at 14:55, <aisaila@bitdefender.com> wrote:
>> > On Ma, 2018-07-31 at 06:34 -0600, Jan Beulich wrote:
>> > > > > > On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
>> > > > @@ -114,12 +117,13 @@ void hvm_register_savevm(uint16_t
>> > > > typecode,
>> > > >
>> > > > /* Syntactic sugar around that function: specify the max
>> > > > number of
>> > > > * saves, and this calculates the size of buffer needed */
>> > > > -#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num,
>> > > > _k) \
>> > > > +#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load,
>> > > > _num, _k) \
>> > > > static int __init
>> > > > __hvm_register_##_x##_save_and_restore(void) \
>> > > > {
>> > > >
>> > > > \
>> > > > hvm_register_savevm(HVM_SAVE_CODE(_x),
>> > > >
>> > > > \
>> > > > #_x,
>> > > >
>> > > > \
>> > > > &_save,
>> > > >
>> > > > \
>> > > > + _save_one,
>> > > >
>> > > > \
>> > > While I generally appreciate the omission of the &, I'd
>> > > prefer if you added it for consistency with the neighboring
>> > > lines.
>> > This was done so we can add NULL in the places that do not have
>> > save_one functions.
>> ??? (I cannot connect your response to my remark.)
>>
> If there is &_save_one then it will not compile if there is any call
> with a NULL.
>
> hpet.c: In function ‘__hvm_register_HPET_save_and_restore’:
> /home/aisaila/work/xen/xen/include/asm/hvm/save.h:126:25: error: lvalue
> required as unary ‘&’ operand
> &_save_one,
> \
> ^
> hpet.c:643:1: note: in expansion of macro ‘HVM_REGISTER_SAVE_RESTORE’
> HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1,
> HVMSR_PER_DOM);
Oh, I'm sorry, that is quite obvious indeed.
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v14 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state
2018-07-25 12:14 [PATCH v14 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (7 preceding siblings ...)
2018-07-25 12:14 ` [PATCH v14 08/11] x86/hvm: Add handler for save_one funcs Alexandru Isaila
@ 2018-07-25 12:14 ` Alexandru Isaila
2018-07-31 13:08 ` Jan Beulich
2018-07-25 12:14 ` [PATCH v14 10/11] x86/hvm: Remove redundant save functions Alexandru Isaila
` (2 subsequent siblings)
11 siblings, 1 reply; 33+ messages in thread
From: Alexandru Isaila @ 2018-07-25 12:14 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This patch is focused on moving the for loop to the caller, so that
we can now save info for a single vcpu instance with the save_one
handlers.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V11:
- Changed the CONTINUE return to return 0.
---
xen/arch/x86/hvm/hvm.c | 19 ++++---
xen/arch/x86/hvm/save.c | 137 +++++++++++++++++++++++++++++++++++++-----------
2 files changed, 116 insertions(+), 40 deletions(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1246ed5..f140305 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -793,6 +793,14 @@ static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
struct segment_register seg;
struct hvm_hw_cpu ctxt = {};
+ /*
+ * We don't need to save state for a vcpu that is down; the restore
+ * code will leave it down if there is nothing saved.
+ */
+ if ( v->pause_flags & VPF_down )
+ return 0;
+
+
/* Architecture-specific vmcs/vmcb bits */
hvm_funcs.save_cpu_ctxt(v, &ctxt);
@@ -897,13 +905,6 @@ static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
for_each_vcpu ( d, v )
{
- /*
- * We don't need to save state for a vcpu that is down; the restore
- * code will leave it down if there is nothing saved.
- */
- if ( v->pause_flags & VPF_down )
- continue;
-
err = hvm_save_cpu_ctxt_one(v, h);
if ( err )
break;
@@ -1196,7 +1197,7 @@ static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h
unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
int err = 0;
- if ( !cpu_has_xsave )
+ if ( !cpu_has_xsave || !xsave_enabled(v) )
return 0; /* do nothing */
err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
@@ -1221,8 +1222,6 @@ static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
for_each_vcpu ( d, v )
{
- if ( !xsave_enabled(v) )
- continue;
err = hvm_save_cpu_xsave_states_one(v, h);
if ( err )
break;
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index b674937..d57648d 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -138,9 +138,12 @@ size_t hvm_save_size(struct domain *d)
int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
XEN_GUEST_HANDLE_64(uint8) handle, uint64_t *bufsz)
{
- int rv;
+ int rv = 0;
hvm_domain_context_t ctxt = { };
const struct hvm_save_descriptor *desc;
+ bool is_single_instance = false;
+ uint32_t off = 0;
+ struct vcpu *v;
if ( d->is_dying ||
typecode > HVM_SAVE_CODE_MAX ||
@@ -148,43 +151,94 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
!hvm_sr_handlers[typecode].save )
return -EINVAL;
+ if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
+ instance < d->max_vcpus )
+ is_single_instance = true;
+
ctxt.size = hvm_sr_handlers[typecode].size;
- if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+ if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
+ instance == d->max_vcpus )
ctxt.size *= d->max_vcpus;
ctxt.data = xmalloc_bytes(ctxt.size);
if ( !ctxt.data )
return -ENOMEM;
- if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
- printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
- d->domain_id, typecode, rv);
- else if ( rv = -ENOENT, ctxt.cur >= sizeof(*desc) )
+ if ( is_single_instance )
+ vcpu_pause(d->vcpu[instance]);
+ else
+ domain_pause(d);
+
+ if ( is_single_instance )
{
- uint32_t off;
+ if ( hvm_sr_handlers[typecode].save_one != NULL )
+ rv = hvm_sr_handlers[typecode].save_one(d->vcpu[instance],
+ &ctxt);
+ else
+ rv = hvm_sr_handlers[typecode].save(d, &ctxt);
- for ( off = 0; off <= (ctxt.cur - sizeof(*desc)); off += desc->length )
+ if ( rv != 0 )
{
- desc = (void *)(ctxt.data + off);
- /* Move past header */
- off += sizeof(*desc);
- if ( ctxt.cur < desc->length ||
- off > ctxt.cur - desc->length )
- break;
- if ( instance == desc->instance )
- {
- rv = 0;
- if ( guest_handle_is_null(handle) )
- *bufsz = desc->length;
- else if ( *bufsz < desc->length )
- rv = -ENOBUFS;
- else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
- rv = -EFAULT;
- else
- *bufsz = desc->length;
- break;
- }
+ printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
+ d->domain_id, typecode, rv);
+ vcpu_unpause(d->vcpu[instance]);
+ }
+ else if ( ctxt.cur >= sizeof(*desc) )
+ {
+ rv = -ENOENT;
+ desc = (void *)(ctxt.data);
+ /* Move past header */
+ off = sizeof(*desc);
+ if ( ctxt.cur < desc->length ||
+ off > ctxt.cur - desc->length )
+ rv = -EFAULT;
+ rv = 0;
+ if ( guest_handle_is_null(handle) )
+ *bufsz = desc->length;
+ else if ( *bufsz < desc->length )
+ rv = -ENOBUFS;
+ else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
+ rv = -EFAULT;
+ else
+ *bufsz = desc->length;
+ vcpu_unpause(d->vcpu[instance]);
}
}
+ else
+ {
+ for_each_vcpu ( d, v )
+ {
+ if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
+ {
+ printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
+ d->domain_id, typecode, rv);
+ }
+ else if ( ctxt.cur >= sizeof(*desc) )
+ {
+ rv = -ENOENT;
+ desc = (void *)(ctxt.data + off);
+ /* Move past header */
+ off += sizeof(*desc);
+ if ( ctxt.cur < desc->length ||
+ off > ctxt.cur - desc->length )
+ break;
+ if ( instance == desc->instance )
+ {
+ rv = 0;
+ if ( guest_handle_is_null(handle) )
+ *bufsz = desc->length;
+ else if ( *bufsz < desc->length )
+ rv = -ENOBUFS;
+ else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
+ rv = -EFAULT;
+ else
+ *bufsz = desc->length;
+ break;
+ }
+ off += desc->length;
+ }
+ }
+ domain_unpause(d);
+ }
xfree(ctxt.data);
return rv;
@@ -196,7 +250,9 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
struct hvm_save_header hdr;
struct hvm_save_end end;
hvm_save_handler handler;
- unsigned int i;
+ hvm_save_one_handler save_one_handler;
+ unsigned int i, rc;
+ struct vcpu *v = NULL;
if ( d->is_dying )
return -EINVAL;
@@ -224,11 +280,32 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
{
handler = hvm_sr_handlers[i].save;
- if ( handler != NULL )
+ save_one_handler = hvm_sr_handlers[i].save_one;
+ if ( save_one_handler != NULL )
{
printk(XENLOG_G_INFO "HVM%d save: %s\n",
d->domain_id, hvm_sr_handlers[i].name);
- if ( handler(d, h) != 0 )
+ for_each_vcpu ( d, v )
+ {
+ rc = save_one_handler(v, h);
+
+ if( rc != 0 )
+ {
+ printk(XENLOG_G_ERR
+ "HVM%d save: failed to save type %"PRIu16"\n",
+ d->domain_id, i);
+ return -EFAULT;
+ }
+ }
+ }
+ else if ( handler != NULL )
+ {
+ printk(XENLOG_G_INFO "HVM%d save: %s\n",
+ d->domain_id, hvm_sr_handlers[i].name);
+
+ rc = handler(d, h);
+
+ if( rc != 0 )
{
printk(XENLOG_G_ERR
"HVM%d save: failed to save type %"PRIu16"\n",
--
2.7.4
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply related [flat|nested] 33+ messages in thread
* Re: [PATCH v14 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state
2018-07-25 12:14 ` [PATCH v14 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
@ 2018-07-31 13:08 ` Jan Beulich
2018-08-01 14:50 ` Isaila Alexandru
0 siblings, 1 reply; 33+ messages in thread
From: Jan Beulich @ 2018-07-31 13:08 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
> This patch is focused on moving the for loop to the caller so
> now we can save info for a single vcpu instance with the save_one
> handlers.
>
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
First of all I'd appreciate if this patch was last in the series, after all
infrastructure changes have been done.
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -793,6 +793,14 @@ static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
> struct segment_register seg;
> struct hvm_hw_cpu ctxt = {};
>
> + /*
> + * We don't need to save state for a vcpu that is down; the restore
> + * code will leave it down if there is nothing saved.
> + */
> + if ( v->pause_flags & VPF_down )
> + return 0;
> +
> +
> /* Architecture-specific vmcs/vmcb bits */
> hvm_funcs.save_cpu_ctxt(v, &ctxt);
>
> @@ -897,13 +905,6 @@ static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
>
> for_each_vcpu ( d, v )
> {
> - /*
> - * We don't need to save state for a vcpu that is down; the restore
> - * code will leave it down if there is nothing saved.
> - */
> - if ( v->pause_flags & VPF_down )
> - continue;
Why is this not done in the respective prereq patch?
> @@ -1196,7 +1197,7 @@ static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h
> unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
> int err = 0;
>
> - if ( !cpu_has_xsave )
> + if ( !cpu_has_xsave || !xsave_enabled(v) )
> return 0; /* do nothing */
>
> err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
> @@ -1221,8 +1222,6 @@ static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
>
> for_each_vcpu ( d, v )
> {
> - if ( !xsave_enabled(v) )
> - continue;
Same here. The goal of the prereq patches is that the patch
here does not have to touch anything other than the abstract
logic.
> --- a/xen/arch/x86/hvm/save.c
> +++ b/xen/arch/x86/hvm/save.c
> @@ -138,9 +138,12 @@ size_t hvm_save_size(struct domain *d)
> int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
> XEN_GUEST_HANDLE_64(uint8) handle, uint64_t *bufsz)
> {
> - int rv;
> + int rv = 0;
> hvm_domain_context_t ctxt = { };
> const struct hvm_save_descriptor *desc;
> + bool is_single_instance = false;
> + uint32_t off = 0;
> + struct vcpu *v;
>
> if ( d->is_dying ||
> typecode > HVM_SAVE_CODE_MAX ||
> @@ -148,43 +151,94 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
> !hvm_sr_handlers[typecode].save )
> return -EINVAL;
>
> + if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
> + instance < d->max_vcpus )
> + is_single_instance = true;
It seems quite pointless to allow instance >= d->max_vcpus to
progress further. In fact at this point all HVMSR_PER_VCPU
instances ought to have a save_one() handler, and hence I'm
thinking you could get away without a is_single_instance local
variable.
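One possible shape (a sketch only; the exact error value here is a
guess, not something established in this thread):

    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
         instance >= d->max_vcpus )
        return -ENOENT;

with the remaining code then keyed purely off the handler kind, making
is_single_instance unnecessary.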
> ctxt.size = hvm_sr_handlers[typecode].size;
> - if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
> + if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
> + instance == d->max_vcpus )
And as a result I don't understand this addition.
I'm afraid the rest of the changes to this function would change
too much as a result, so I'm skipping them for now.
> @@ -196,7 +250,9 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
I don't think the remaining changes actually belong in a patch with
the title this one has.
> struct hvm_save_header hdr;
> struct hvm_save_end end;
> hvm_save_handler handler;
> - unsigned int i;
> + hvm_save_one_handler save_one_handler;
> + unsigned int i, rc;
rc would not normally be of unsigned type. I'm anyway struggling
to understand why you introduce the variable. You don't even log
its value in the error case(s).
> + struct vcpu *v = NULL;
Pointless initializer afaict.
> @@ -224,11 +280,32 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
> for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
> {
> handler = hvm_sr_handlers[i].save;
> - if ( handler != NULL )
> + save_one_handler = hvm_sr_handlers[i].save_one;
> + if ( save_one_handler != NULL )
> {
> printk(XENLOG_G_INFO "HVM%d save: %s\n",
> d->domain_id, hvm_sr_handlers[i].name);
Please also log the vCPU ID (perhaps using "HVM %pv save: ...", and
also on the error path a few lines down), making this message at the
same time distinguishable from ...
> - if ( handler(d, h) != 0 )
> + for_each_vcpu ( d, v )
> + {
> + rc = save_one_handler(v, h);
> +
> + if( rc != 0 )
> + {
> + printk(XENLOG_G_ERR
> + "HVM%d save: failed to save type %"PRIu16"\n",
> + d->domain_id, i);
> + return -EFAULT;
> + }
> + }
> + }
> + else if ( handler != NULL )
> + {
> + printk(XENLOG_G_INFO "HVM%d save: %s\n",
> + d->domain_id, hvm_sr_handlers[i].name);
... this one and ...
> + rc = handler(d, h);
> +
> + if( rc != 0 )
> {
> printk(XENLOG_G_ERR
> "HVM%d save: failed to save type %"PRIu16"\n",
... for the error path from this one.
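For the per-vCPU path that could look roughly like (a sketch only, not
wording from this thread):

    printk(XENLOG_G_INFO "HVM %pv save: %s\n", v, hvm_sr_handlers[i].name);

and similarly on the error path:

    printk(XENLOG_G_ERR "HVM %pv save: failed to save type %"PRIu16"\n",
           v, i);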
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v14 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state
2018-07-31 13:08 ` Jan Beulich
@ 2018-08-01 14:50 ` Isaila Alexandru
2018-08-01 15:10 ` Jan Beulich
0 siblings, 1 reply; 33+ messages in thread
From: Isaila Alexandru @ 2018-08-01 14:50 UTC (permalink / raw)
To: Jan Beulich; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
On Ma, 2018-07-31 at 07:08 -0600, Jan Beulich wrote:
> >
> > >
> > > >
> > > > On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
> > This patch is focused on moving the for loop to the caller so
> > now we can save info for a single vcpu instance with the save_one
> > handlers.
> >
> > Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> First of all I'd appreciate if this patch was last in the series,
> after all
> infrastructure changes have been done.
>
If this patch were the last one in the series, then between the patch
that removes the save functions (and with them the for loops) and this
one the code would not run as expected, because there would be no loop
left.
This patch has to be followed by "Remove redundant save functions",
not the other way around.
Please clarify if I did not understand correctly.
Thanks,
Alex
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v14 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state
2018-08-01 14:50 ` Isaila Alexandru
@ 2018-08-01 15:10 ` Jan Beulich
0 siblings, 0 replies; 33+ messages in thread
From: Jan Beulich @ 2018-08-01 15:10 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 01.08.18 at 16:50, <aisaila@bitdefender.com> wrote:
> On Ma, 2018-07-31 at 07:08 -0600, Jan Beulich wrote:
>> >
>> > >
>> > > >
>> > > > On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
>> > This patch is focused on moving the for loop to the caller so
>> > now we can save info for a single vcpu instance with the save_one
>> > handlers.
>> >
>> > Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
>> First of all I'd appreciate if this patch was last in the series,
>> after all
>> infrastructure changes have been done.
>>
> If this patch will be the last one in the series then between the one
> that removes the save functions (and with that the for) and this one
> the code will not run as expected because there will be no for.
Well, I didn't mean for you to re-order things without adjusting the
code accordingly. You will want to transform hvm_save() first, and only
as the final step make hvm_save_one() behave the way you're actually
after. This may (as already indicated in some of the individual
responses) entail changing how the series is split into patches.
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v14 10/11] x86/hvm: Remove redundant save functions
2018-07-25 12:14 [PATCH v14 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (8 preceding siblings ...)
2018-07-25 12:14 ` [PATCH v14 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
@ 2018-07-25 12:14 ` Alexandru Isaila
2018-07-31 13:24 ` Jan Beulich
2018-07-25 12:14 ` [PATCH v14 11/11] x86/hvm: Remove save_one handler Alexandru Isaila
2018-07-25 12:38 ` [PATCH v14 00/11] x86/domctl: Save info for one vcpu instance Jan Beulich
11 siblings, 1 reply; 33+ messages in thread
From: Alexandru Isaila @ 2018-07-25 12:14 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
This patch removes the redundant save functions and renames the
save_one* variants to take their place. It then changes the domain
parameter to vcpu in the save handlers.
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
Changes since V11:
- Remove enum return type for save funcs.
---
xen/arch/x86/cpu/mcheck/vmce.c | 19 ++---------
xen/arch/x86/hvm/hpet.c | 3 +-
xen/arch/x86/hvm/hvm.c | 75 +++++-------------------------------------
xen/arch/x86/hvm/i8254.c | 3 +-
xen/arch/x86/hvm/irq.c | 9 +++--
xen/arch/x86/hvm/mtrr.c | 19 ++---------
xen/arch/x86/hvm/pmtimer.c | 3 +-
xen/arch/x86/hvm/rtc.c | 3 +-
xen/arch/x86/hvm/save.c | 26 +++------------
xen/arch/x86/hvm/vioapic.c | 3 +-
xen/arch/x86/hvm/viridian.c | 22 +++----------
xen/arch/x86/hvm/vlapic.c | 34 ++++++-------------
xen/arch/x86/hvm/vpic.c | 3 +-
xen/include/asm-x86/hvm/save.h | 2 +-
14 files changed, 50 insertions(+), 174 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 35044d7..b53ad7c 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,7 +349,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
return ret;
}
-static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int vmce_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
{
struct hvm_vmce_vcpu ctxt = {
.caps = v->arch.vmce.mcg_cap,
@@ -361,21 +361,6 @@ static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
}
-static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = vmce_save_vcpu_ctxt_one(v, h);
- if ( err )
- break;
- }
-
- return err;
-}
-
static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
{
unsigned int vcpuid = hvm_load_instance(h);
@@ -396,7 +381,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
- vmce_save_vcpu_ctxt_one,
+ NULL,
vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
/*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index aff8613..6e7f744 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -516,8 +516,9 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
};
-static int hpet_save(struct domain *d, hvm_domain_context_t *h)
+static int hpet_save(struct vcpu *vcpu, hvm_domain_context_t *h)
{
+ struct domain *d = vcpu->domain;
HPETState *hp = domain_vhpet(d);
struct vcpu *v = pt_global_vcpu_target(d);
int rc;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index f140305..122552e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,7 +740,7 @@ void hvm_domain_destroy(struct domain *d)
destroy_vpci_mmcfg(d);
}
-static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
{
struct hvm_tsc_adjust ctxt = {
.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
@@ -749,21 +749,6 @@ static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
}
-static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = hvm_save_tsc_adjust_one(v, h);
- if ( err )
- break;
- }
-
- return err;
-}
-
static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
{
unsigned int vcpuid = hvm_load_instance(h);
@@ -785,10 +770,10 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
- hvm_save_tsc_adjust_one,
+ NULL,
hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
-static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
{
struct segment_register seg;
struct hvm_hw_cpu ctxt = {};
@@ -898,20 +883,6 @@ static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
}
-static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = hvm_save_cpu_ctxt_one(v, h);
- if ( err )
- break;
- }
- return err;
-}
-
/* Return a string indicating the error, or NULL for valid. */
const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
signed int cr0_pg)
@@ -1183,7 +1154,7 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, NULL,
hvm_load_cpu_ctxt,
1, HVMSR_PER_VCPU);
@@ -1191,7 +1162,7 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
save_area) + \
xstate_ctxt_size(xcr0))
-static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states(struct vcpu *v, hvm_domain_context_t *h)
{
struct hvm_hw_cpu_xsave *ctxt;
unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
@@ -1215,21 +1186,6 @@ static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h
return 0;
}
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = hvm_save_cpu_xsave_states_one(v, h);
- if ( err )
- break;
- }
-
- return err;
-}
-
/*
* Structure layout conformity checks, documenting correctness of the cast in
* the invocation of validate_xstate() below.
@@ -1367,7 +1323,7 @@ static const uint32_t msrs_to_send[] = {
};
static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
-static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs(struct vcpu *v, hvm_domain_context_t *h)
{
struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
struct hvm_msr *ctxt;
@@ -1428,21 +1384,6 @@ static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
return 0;
}
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = hvm_save_cpu_msrs_one(v, h);
- if ( err )
- break;
- }
-
- return err;
-}
-
static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
{
unsigned int i, vcpuid = hvm_load_instance(h);
@@ -1535,7 +1476,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
hvm_register_savevm(CPU_XSAVE_CODE,
"CPU_XSAVE",
hvm_save_cpu_xsave_states,
- hvm_save_cpu_xsave_states_one,
+ NULL,
hvm_load_cpu_xsave_states,
HVM_CPU_XSAVE_SIZE(xfeature_mask) +
sizeof(struct hvm_save_descriptor),
@@ -1548,7 +1489,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
hvm_register_savevm(CPU_MSR_CODE,
"CPU_MSR",
hvm_save_cpu_msrs,
- hvm_save_cpu_msrs_one,
+ NULL,
hvm_load_cpu_msrs,
HVM_CPU_MSR_SIZE(msr_count_max) +
sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index ec77b23..d51463d 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -390,8 +390,9 @@ void pit_stop_channel0_irq(PITState *pit)
spin_unlock(&pit->lock);
}
-static int pit_save(struct domain *d, hvm_domain_context_t *h)
+static int pit_save(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
PITState *pit = domain_vpit(d);
int rc;
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 770eab7..a405e7f 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -630,8 +630,9 @@ static int __init dump_irq_info_key_init(void)
}
__initcall(dump_irq_info_key_init);
-static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_pci(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_irq *hvm_irq = hvm_domain_irq(d);
unsigned int asserted, pdev, pintx;
int rc;
@@ -662,16 +663,18 @@ static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
return rc;
}
-static int irq_save_isa(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_isa(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_irq *hvm_irq = hvm_domain_irq(d);
/* Save ISA IRQ lines */
return ( hvm_save_entry(ISA_IRQ, 0, h, &hvm_irq->isa_irq) );
}
-static int irq_save_link(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_link(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_irq *hvm_irq = hvm_domain_irq(d);
/* Save PCI-ISA link state */
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 1cb2a2e..b7c9bd6 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -718,7 +718,7 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
return 0;
}
-static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr(struct vcpu *v, hvm_domain_context_t *h)
{
const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
struct hvm_hw_mtrr hw_mtrr = {
@@ -758,21 +758,6 @@ static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
}
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- /* save mtrr&pat */
- for_each_vcpu(d, v)
- {
- err = hvm_save_mtrr_msr_one(v, h);
- if ( err )
- break;
- }
- return err;
-}
-
static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
{
int vcpuid, i;
@@ -823,7 +808,7 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_save_mtrr_msr_one,
+HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, NULL,
hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
void memory_type_changed(struct domain *d)
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 0a5e8ce..461d2df 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -249,8 +249,9 @@ static int handle_pmt_io(
return X86EMUL_OKAY;
}
-static int acpi_save(struct domain *d, hvm_domain_context_t *h)
+static int acpi_save(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_hw_acpi *acpi = &d->arch.hvm_domain.acpi;
PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
uint32_t x, msb = acpi->tmr_val & TMR_VAL_MSB;
diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index ce7e71b..ea2fbd3 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -737,8 +737,9 @@ void rtc_migrate_timers(struct vcpu *v)
}
/* Save RTC hardware state */
-static int rtc_save(struct domain *d, hvm_domain_context_t *h)
+static int rtc_save(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
RTCState *s = domain_vrtc(d);
int rc;
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index d57648d..c579e49 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -174,7 +174,7 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
rv = hvm_sr_handlers[typecode].save_one(d->vcpu[instance],
&ctxt);
else
- rv = hvm_sr_handlers[typecode].save(d, &ctxt);
+ rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt);
if ( rv != 0 )
{
@@ -207,7 +207,8 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
{
for_each_vcpu ( d, v )
{
- if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
+ if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance],
+ &ctxt)) != 0 )
{
printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
d->domain_id, typecode, rv);
@@ -250,7 +251,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
struct hvm_save_header hdr;
struct hvm_save_end end;
hvm_save_handler handler;
- hvm_save_one_handler save_one_handler;
unsigned int i, rc;
struct vcpu *v = NULL;
@@ -280,14 +280,13 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
{
handler = hvm_sr_handlers[i].save;
- save_one_handler = hvm_sr_handlers[i].save_one;
- if ( save_one_handler != NULL )
+ if ( handler != NULL )
{
printk(XENLOG_G_INFO "HVM%d save: %s\n",
d->domain_id, hvm_sr_handlers[i].name);
for_each_vcpu ( d, v )
{
- rc = save_one_handler(v, h);
+ rc = handler(v, h);
if( rc != 0 )
{
@@ -298,21 +297,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
}
}
}
- else if ( handler != NULL )
- {
- printk(XENLOG_G_INFO "HVM%d save: %s\n",
- d->domain_id, hvm_sr_handlers[i].name);
-
- rc = handler(d, h);
-
- if( rc != 0 )
- {
- printk(XENLOG_G_ERR
- "HVM%d save: failed to save type %"PRIu16"\n",
- d->domain_id, i);
- return -EFAULT;
- }
- }
}
/* Save an end-of-file marker */
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 66f54e4..cec4b1b 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -569,8 +569,9 @@ int vioapic_get_trigger_mode(const struct domain *d, unsigned int gsi)
return vioapic->redirtbl[pin].fields.trig_mode;
}
-static int ioapic_save(struct domain *d, hvm_domain_context_t *h)
+static int ioapic_save(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_vioapic *s;
if ( !has_vioapic(d) )
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index f72c8a9..1991829 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -990,8 +990,9 @@ out:
return HVM_HCALL_completed;
}
-static int viridian_save_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_domain_ctxt(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_viridian_domain_context ctxt = {
.time_ref_count = d->arch.hvm_domain.viridian.time_ref_count.val,
.hypercall_gpa = d->arch.hvm_domain.viridian.hypercall_gpa.raw,
@@ -1026,7 +1027,7 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt, NULL,
viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
-static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
{
struct hvm_viridian_vcpu_context ctxt = {};
@@ -1039,21 +1040,6 @@ static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
}
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
- struct vcpu *v;
- int err = 0;
-
- for_each_vcpu ( d, v )
- {
- err = viridian_save_vcpu_ctxt_one(v, h);
- if ( err )
- break;
- }
-
- return err;
-}
-
static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
{
int vcpuid;
@@ -1085,7 +1071,7 @@ static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_VCPU, viridian_save_vcpu_ctxt,
- viridian_save_vcpu_ctxt_one,
+ NULL,
viridian_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
static int __init parse_viridian_version(const char *arg)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index eff6070..e51dff7 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1435,45 +1435,31 @@ static void lapic_rearm(struct vlapic *s)
s->timer_last_update = s->pt.last_plt_gtime;
}
-static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
+static int lapic_save_hidden(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
+ struct domain *d = v->domain;
struct vlapic *s;
- int rc = 0;
if ( !has_vlapic(d) )
return 0;
- for_each_vcpu ( d, v )
- {
- s = vcpu_vlapic(v);
- if ( (rc = hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw)) != 0 )
- break;
- }
-
- return rc;
+ s = vcpu_vlapic(v);
+ return hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw);
}
-static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
+static int lapic_save_regs(struct vcpu *v, hvm_domain_context_t *h)
{
- struct vcpu *v;
+ struct domain *d = v->domain;
struct vlapic *s;
- int rc = 0;
if ( !has_vlapic(d) )
return 0;
- for_each_vcpu ( d, v )
- {
- if ( hvm_funcs.sync_pir_to_irr )
- hvm_funcs.sync_pir_to_irr(v);
-
- s = vcpu_vlapic(v);
- if ( (rc = hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs)) != 0 )
- break;
- }
+ if ( hvm_funcs.sync_pir_to_irr )
+ hvm_funcs.sync_pir_to_irr(v);
- return rc;
+ s = vcpu_vlapic(v);
+ return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs);
}
/*
diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index ca9b4cb..e03d8cf 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -371,8 +371,9 @@ static int vpic_intercept_elcr_io(
return X86EMUL_OKAY;
}
-static int vpic_save(struct domain *d, hvm_domain_context_t *h)
+static int vpic_save(struct vcpu *v, hvm_domain_context_t *h)
{
+ struct domain *d = v->domain;
struct hvm_hw_vpic *s;
int i;
diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-x86/hvm/save.h
index 2538628..e64b4df 100644
--- a/xen/include/asm-x86/hvm/save.h
+++ b/xen/include/asm-x86/hvm/save.h
@@ -95,7 +95,7 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
* The save handler may save multiple instances of a type into the buffer;
* the load handler will be called once for each instance found when
* restoring. Both return non-zero on error. */
-typedef int (*hvm_save_handler) (struct domain *d,
+typedef int (*hvm_save_handler) (struct vcpu *v,
hvm_domain_context_t *h);
typedef int (*hvm_save_one_handler)(struct vcpu *v,
hvm_domain_context_t *h);
--
2.7.4
* Re: [PATCH v14 10/11] x86/hvm: Remove redundant save functions
2018-07-25 12:14 ` [PATCH v14 10/11] x86/hvm: Remove redundant save functions Alexandru Isaila
@ 2018-07-31 13:24 ` Jan Beulich
2018-07-31 13:32 ` Isaila Alexandru
0 siblings, 1 reply; 33+ messages in thread
From: Jan Beulich @ 2018-07-31 13:24 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
> --- a/xen/arch/x86/hvm/hpet.c
> +++ b/xen/arch/x86/hvm/hpet.c
> @@ -516,8 +516,9 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
> };
>
>
> -static int hpet_save(struct domain *d, hvm_domain_context_t *h)
> +static int hpet_save(struct vcpu *vcpu, hvm_domain_context_t *h)
Consistently v please.
> @@ -785,10 +770,10 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
> }
>
> HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
> - hvm_save_tsc_adjust_one,
> + NULL,
> hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
I think there's still an ordering issue with these last parts of the series: You
shouldn't need to re-introduce NULL here. The abstract logic should
first be switched to no longer use the .save handlers, at which point
you can simply drop the field at the same time as you rename the
functions.
> --- a/xen/arch/x86/hvm/save.c
> +++ b/xen/arch/x86/hvm/save.c
> @@ -174,7 +174,7 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
> rv = hvm_sr_handlers[typecode].save_one(d->vcpu[instance],
> &ctxt);
> else
> - rv = hvm_sr_handlers[typecode].save(d, &ctxt);
> + rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt);
So this sits on the is_single_instance path and hence instance has
been bounds checked.
> @@ -207,7 +207,8 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
> {
> for_each_vcpu ( d, v )
> {
> - if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
> + if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance],
> + &ctxt)) != 0 )
But aren't you potentially accessing the array beyond its bounds here?
Why don't you simply pass v? Otoh - why is this in a for_each_vcpu()
loop? Anyway, as said, I think the previous patch needs quite a bit
of work in this area. As a separate remark though - please make sure
your series can be applied in multiple steps, i.e. it needs to not break
things at any patch boundary.
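For reference, simply passing v through would look roughly like this (a sketch
only; whether the surrounding for_each_vcpu() loop is needed at all is the
separate question above, and 'instance' would still want validating against
d->max_vcpus first):

    for_each_vcpu ( d, v )
    {
        /* Use the loop variable rather than re-indexing d->vcpu[], so no
         * unchecked 'instance' gets dereferenced on this path. */
        if ( (rv = hvm_sr_handlers[typecode].save(v, &ctxt)) != 0 )
        {
            printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
                   d->domain_id, typecode, rv);
            break;
        }
    }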
> @@ -280,14 +280,13 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
> for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
> {
> handler = hvm_sr_handlers[i].save;
> - save_one_handler = hvm_sr_handlers[i].save_one;
> - if ( save_one_handler != NULL )
> + if ( handler != NULL )
> {
> printk(XENLOG_G_INFO "HVM%d save: %s\n",
> d->domain_id, hvm_sr_handlers[i].name);
> for_each_vcpu ( d, v )
> {
> - rc = save_one_handler(v, h);
> + rc = handler(v, h);
Aren't you invoking HVMSR_PER_DOM handlers now once per vCPU
instead of once per domain?
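One way to keep the per-domain semantics would be to key off the registered
kind, along these lines (a sketch only; it assumes the converted per-domain
handlers only ever look at v->domain, and that d->vcpu[0] is always valid for
a domain being saved):

    if ( handler != NULL )
    {
        printk(XENLOG_G_INFO "HVM%d save: %s\n",
               d->domain_id, hvm_sr_handlers[i].name);

        if ( hvm_sr_handlers[i].kind == HVMSR_PER_VCPU )
        {
            for_each_vcpu ( d, v )
            {
                rc = handler(v, h);
                if ( rc != 0 )
                    break;
            }
        }
        else
            /* Per-domain state gets saved exactly once. */
            rc = handler(d->vcpu[0], h);

        if ( rc != 0 )
        {
            printk(XENLOG_G_ERR
                   "HVM%d save: failed to save type %"PRIu16"\n",
                   d->domain_id, i);
            return -EFAULT;
        }
    }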
> --- a/xen/include/asm-x86/hvm/save.h
> +++ b/xen/include/asm-x86/hvm/save.h
> @@ -95,7 +95,7 @@ static inline uint16_t hvm_load_instance(struct
> hvm_domain_context *h)
> * The save handler may save multiple instances of a type into the buffer;
> * the load handler will be called once for each instance found when
> * restoring. Both return non-zero on error. */
> -typedef int (*hvm_save_handler) (struct domain *d,
> +typedef int (*hvm_save_handler) (struct vcpu *v,
Too many spaces after struct.
Jan
* Re: [PATCH v14 10/11] x86/hvm: Remove redundant save functions
2018-07-31 13:24 ` Jan Beulich
@ 2018-07-31 13:32 ` Isaila Alexandru
2018-07-31 14:26 ` Jan Beulich
0 siblings, 1 reply; 33+ messages in thread
From: Isaila Alexandru @ 2018-07-31 13:32 UTC (permalink / raw)
To: Jan Beulich; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
On Tue, 2018-07-31 at 07:24 -0600, Jan Beulich wrote:
> >
> > >
> > > >
> > > > On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
> > --- a/xen/arch/x86/hvm/hpet.c
> > +++ b/xen/arch/x86/hvm/hpet.c
> > @@ -516,8 +516,9 @@ static const struct hvm_mmio_ops hpet_mmio_ops
> > = {
> > };
> >
> >
> > -static int hpet_save(struct domain *d, hvm_domain_context_t *h)
> > +static int hpet_save(struct vcpu *vcpu, hvm_domain_context_t *h)
> Consistently v please.
>
Here hpet_save() works with a locally defined struct vcpu *v, so I used
struct vcpu *vcpu for the parameter in order to change as little code as
possible.
If the v needs to be constant then I will change it in the next
version.
Thanks,
Alex
* Re: [PATCH v14 10/11] x86/hvm: Remove redundant save functions
2018-07-31 13:32 ` Isaila Alexandru
@ 2018-07-31 14:26 ` Jan Beulich
0 siblings, 0 replies; 33+ messages in thread
From: Jan Beulich @ 2018-07-31 14:26 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 31.07.18 at 15:32, <aisaila@bitdefender.com> wrote:
> On Ma, 2018-07-31 at 07:24 -0600, Jan Beulich wrote:
>> > > > On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
>> > --- a/xen/arch/x86/hvm/hpet.c
>> > +++ b/xen/arch/x86/hvm/hpet.c
>> > @@ -516,8 +516,9 @@ static const struct hvm_mmio_ops hpet_mmio_ops
>> > = {
>> > };
>> >
>> >
>> > -static int hpet_save(struct domain *d, hvm_domain_context_t *h)
>> > +static int hpet_save(struct vcpu *vcpu, hvm_domain_context_t *h)
>> Consistently v please.
>>
> Here the hpet_save() works with a local defined struct vcpu* v so I
> have struct vcpu *vcpu in order to change as little code as possible.
> If the v needs to be constant then I will change it in the next
> version.
Yes please, but there's not a whole lot to change: Simply drop the
local variable declaration of v and make its initializer a normal
statement (as the function doesn't further care about the passed-in v).
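I.e. roughly the following (a sketch only; the unchanged remainder of the
body is elided, and the has_vhpet() / pt_global_vcpu_target() uses are the
ones already in the function rather than anything quoted above):

    static int hpet_save(struct vcpu *v, hvm_domain_context_t *h)
    {
        struct domain *d = v->domain;
        HPETState *hp = domain_vhpet(d);

        if ( !has_vhpet(d) )
            return 0;

        /* The passed-in v is only needed to derive d, so reuse the
         * variable for the timer's target vcpu from here on. */
        v = pt_global_vcpu_target(d);

        /* ... rest of the function unchanged ... */
    }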
Jan
* [PATCH v14 11/11] x86/hvm: Remove save_one handler
2018-07-25 12:14 [PATCH v14 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (9 preceding siblings ...)
2018-07-25 12:14 ` [PATCH v14 10/11] x86/hvm: Remove redundant save functions Alexandru Isaila
@ 2018-07-25 12:14 ` Alexandru Isaila
2018-07-25 12:38 ` [PATCH v14 00/11] x86/domctl: Save info for one vcpu instance Jan Beulich
11 siblings, 0 replies; 33+ messages in thread
From: Alexandru Isaila @ 2018-07-25 12:14 UTC (permalink / raw)
To: xen-devel
Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
Alexandru Isaila
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
xen/arch/x86/cpu/mcheck/vmce.c | 1 -
xen/arch/x86/hvm/hpet.c | 2 +-
xen/arch/x86/hvm/hvm.c | 5 +----
xen/arch/x86/hvm/i8254.c | 2 +-
xen/arch/x86/hvm/irq.c | 6 +++---
xen/arch/x86/hvm/mtrr.c | 2 +-
xen/arch/x86/hvm/pmtimer.c | 2 +-
xen/arch/x86/hvm/rtc.c | 2 +-
xen/arch/x86/hvm/save.c | 14 +++-----------
xen/arch/x86/hvm/vioapic.c | 2 +-
xen/arch/x86/hvm/viridian.c | 3 +--
xen/arch/x86/hvm/vlapic.c | 4 ++--
xen/arch/x86/hvm/vpic.c | 2 +-
xen/include/asm-x86/hvm/save.h | 6 +-----
14 files changed, 18 insertions(+), 35 deletions(-)
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index b53ad7c..763d56b 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -381,7 +381,6 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
- NULL,
vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
/*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 6e7f744..3ed6547 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -641,7 +641,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
static void hpet_set(HPETState *h)
{
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 122552e..6f1dbd8 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -770,7 +770,6 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
- NULL,
hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
static int hvm_save_cpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
@@ -1154,7 +1153,7 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, NULL,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt,
hvm_load_cpu_ctxt,
1, HVMSR_PER_VCPU);
@@ -1476,7 +1475,6 @@ static int __init hvm_register_CPU_save_and_restore(void)
hvm_register_savevm(CPU_XSAVE_CODE,
"CPU_XSAVE",
hvm_save_cpu_xsave_states,
- NULL,
hvm_load_cpu_xsave_states,
HVM_CPU_XSAVE_SIZE(xfeature_mask) +
sizeof(struct hvm_save_descriptor),
@@ -1489,7 +1487,6 @@ static int __init hvm_register_CPU_save_and_restore(void)
hvm_register_savevm(CPU_MSR_CODE,
"CPU_MSR",
hvm_save_cpu_msrs,
- NULL,
hvm_load_cpu_msrs,
HVM_CPU_MSR_SIZE(msr_count_max) +
sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index d51463d..e0d2255 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -438,7 +438,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
void pit_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index a405e7f..b37275c 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -767,9 +767,9 @@ static int irq_load_link(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa,
1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index b7c9bd6..d9a4532 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -808,7 +808,7 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, NULL,
+HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr,
hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
void memory_type_changed(struct domain *d)
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 461d2df..d8dcbc2 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -310,7 +310,7 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, NULL, acpi_load,
+HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, acpi_load,
1, HVMSR_PER_DOM);
int pmtimer_change_ioport(struct domain *d, unsigned int version)
diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index ea2fbd3..58b70fc 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -784,7 +784,7 @@ static int rtc_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, NULL, rtc_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, rtc_load, 1, HVMSR_PER_DOM);
void rtc_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index c579e49..01e3cbe 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -85,18 +85,16 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
/* List of handlers for various HVM save and restore types */
static struct {
hvm_save_handler save;
- hvm_save_one_handler save_one;
hvm_load_handler load;
const char *name;
size_t size;
int kind;
-} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, NULL, "<?>"}, };
+} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, "<?>"}, };
/* Init-time function to add entries to that list */
void __init hvm_register_savevm(uint16_t typecode,
const char *name,
hvm_save_handler save_state,
- hvm_save_one_handler save_one,
hvm_load_handler load_state,
size_t size, int kind)
{
@@ -104,7 +102,6 @@ void __init hvm_register_savevm(uint16_t typecode,
ASSERT(hvm_sr_handlers[typecode].save == NULL);
ASSERT(hvm_sr_handlers[typecode].load == NULL);
hvm_sr_handlers[typecode].save = save_state;
- hvm_sr_handlers[typecode].save_one = save_one;
hvm_sr_handlers[typecode].load = load_state;
hvm_sr_handlers[typecode].name = name;
hvm_sr_handlers[typecode].size = size;
@@ -170,13 +167,8 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
if ( is_single_instance )
{
- if ( hvm_sr_handlers[typecode].save_one != NULL )
- rv = hvm_sr_handlers[typecode].save_one(d->vcpu[instance],
- &ctxt);
- else
- rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt);
-
- if ( rv != 0 )
+ if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance],
+ &ctxt)) != 0 )
{
printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
d->domain_id, typecode, rv);
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index cec4b1b..86d02cf 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -602,7 +602,7 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
return hvm_load_entry(IOAPIC, h, &s->domU);
}
-HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, NULL, ioapic_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, ioapic_load, 1, HVMSR_PER_DOM);
void vioapic_reset(struct domain *d)
{
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 1991829..35616e5 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1024,7 +1024,7 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt, NULL,
+HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
static int viridian_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
@@ -1071,7 +1071,6 @@ static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
}
HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_VCPU, viridian_save_vcpu_ctxt,
- NULL,
viridian_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
static int __init parse_viridian_version(const char *arg)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index e51dff7..6337cdb 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1562,9 +1562,9 @@ static int lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, NULL, lapic_load_hidden,
+HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_load_hidden,
1, HVMSR_PER_VCPU);
-HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, NULL, lapic_load_regs,
+HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_load_regs,
1, HVMSR_PER_VCPU);
int vlapic_init(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index e03d8cf..bad5066 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -412,7 +412,7 @@ static int vpic_load(struct domain *d, hvm_domain_context_t *h)
return 0;
}
-HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, NULL, vpic_load, 2, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, vpic_load, 2, HVMSR_PER_DOM);
void vpic_reset(struct domain *d)
{
diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-x86/hvm/save.h
index e64b4df..f7d31ad 100644
--- a/xen/include/asm-x86/hvm/save.h
+++ b/xen/include/asm-x86/hvm/save.h
@@ -97,8 +97,6 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
* restoring. Both return non-zero on error. */
typedef int (*hvm_save_handler) (struct vcpu *v,
hvm_domain_context_t *h);
-typedef int (*hvm_save_one_handler)(struct vcpu *v,
- hvm_domain_context_t *h);
typedef int (*hvm_load_handler) (struct domain *d,
hvm_domain_context_t *h);
@@ -107,7 +105,6 @@ typedef int (*hvm_load_handler) (struct domain *d,
void hvm_register_savevm(uint16_t typecode,
const char *name,
hvm_save_handler save_state,
- hvm_save_one_handler save_one,
hvm_load_handler load_state,
size_t size, int kind);
@@ -117,13 +114,12 @@ void hvm_register_savevm(uint16_t typecode,
/* Syntactic sugar around that function: specify the max number of
* saves, and this calculates the size of buffer needed */
-#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load, _num, _k) \
+#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k) \
static int __init __hvm_register_##_x##_save_and_restore(void) \
{ \
hvm_register_savevm(HVM_SAVE_CODE(_x), \
#_x, \
&_save, \
- _save_one, \
&_load, \
(_num) * (HVM_SAVE_LENGTH(_x) \
+ sizeof (struct hvm_save_descriptor)), \
--
2.7.4
* Re: [PATCH v14 00/11] x86/domctl: Save info for one vcpu instance
2018-07-25 12:14 [PATCH v14 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
` (10 preceding siblings ...)
2018-07-25 12:14 ` [PATCH v14 11/11] x86/hvm: Remove save_one handler Alexandru Isaila
@ 2018-07-25 12:38 ` Jan Beulich
11 siblings, 0 replies; 33+ messages in thread
From: Jan Beulich @ 2018-07-25 12:38 UTC (permalink / raw)
To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel
>>> On 25.07.18 at 14:14, <aisaila@bitdefender.com> wrote:
> This patch series addresses the idea of saving data from a single vcpu instance.
> It starts by adding *save_one functions, then it introduces a handler for the
> new save_one* funcs and makes use of it in the hvm_save and hvm_save_one funcs.
> The final 2 patches are used for cleanup. The first one removes the save* funcs and
> renames the save_one* to save.
> The last patch removes the save_one* handler that is no longer used.
I'm sorry for the rant, but I can't resist any longer: You're of course
free to continue this way and post new versions after every individual
comment. You may be aiming for a new record of versions of a patch
series this way. It's not really making much sense to me though. I'd
really like to get through this series in its entirety, but so far you've
managed to push it down my priority list 10 or so times (among
equally important series, I judge by time which to look at next). To me
personally, all these recurring submissions cause is extra work (a
little each time, but it adds up): moving the mail to its designated
folder, then finding and deleting the prior version.
Jan