From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Wen Congyang <wency@cn.fujitsu.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>,
Jiang Yunhong <yunhong.jiang@intel.com>,
Dong Eddie <eddie.dong@intel.com>, Ye Wei <wei.ye1987@gmail.com>,
xen-devel <xen-devel@lists.xen.org>,
Hong Tao <bobby.hong@huawei.com>, Xu Yao <xuyao.xu@huawei.com>,
Shriram Rajagopalan <rshriram@cs.ubc.ca>
Subject: Re: [RFC Patch v2 01/16] xen: introduce new hypercall to reset vcpu
Date: Thu, 11 Jul 2013 10:44:05 +0100 [thread overview]
Message-ID: <51DE7E65.6080507@citrix.com> (raw)
In-Reply-To: <1373531748-12547-2-git-send-email-wency@cn.fujitsu.com>
On 11/07/13 09:35, Wen Congyang wrote:
> In colo mode, SVM is running, and it will create pagetable, use gdt...
> When we do a new checkpoint, we may need to rollback all this operations.
> This new hypercall will do this.
>
> Signed-off-by: Ye Wei <wei.ye1987@gmail.com>
> Signed-off-by: Jiang Yunhong <yunhong.jiang@intel.com>
> Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
> ---
> xen/arch/x86/domain.c | 57 +++++++++++++++++++++++++++++++++++++++++++
> xen/arch/x86/x86_64/entry.S | 4 +++
> xen/include/public/xen.h | 1 +
> 3 files changed, 62 insertions(+), 0 deletions(-)
>
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index 874742c..709f77f 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1930,6 +1930,63 @@ int domain_relinquish_resources(struct domain *d)
> return 0;
> }
>
> +int do_reset_vcpu_op(unsigned long domid)
> +{
> + struct vcpu *v;
> + struct domain *d;
> + int ret;
> +
> + if ( domid == DOMID_SELF )
> + /* We can't destroy outself pagetables */
"We can't destroy our own pagetables"
> + return -EINVAL;
> +
> + if ( (d = rcu_lock_domain_by_id(domid)) == NULL )
> + return -EINVAL;
> +
> + BUG_ON(!cpumask_empty(d->domain_dirty_cpumask));
This looks bogus.  What guarantee is there (other than the toolstack
issuing the appropriate hypercalls in an appropriate order) that this is
actually true?
> + domain_pause(d);
> +
> + if ( d->arch.relmem == RELMEM_not_started )
> + {
> + for_each_vcpu ( d, v )
> + {
> + /* Drop the in-use references to page-table bases. */
> + ret = vcpu_destroy_pagetables(v);
> + if ( ret )
> + return ret;
> +
> + unmap_vcpu_info(v);
> + v->is_initialised = 0;
> + }
> +
> + if ( !is_hvm_domain(d) )
> + {
> + for_each_vcpu ( d, v )
> + {
> + /*
> + * Relinquish GDT mappings. No need for explicit unmapping of the
> + * LDT as it automatically gets squashed with the guest mappings.
> + */
> + destroy_gdt(v);
> + }
> +
> + if ( d->arch.pv_domain.pirq_eoi_map != NULL )
> + {
> + unmap_domain_page_global(d->arch.pv_domain.pirq_eoi_map);
> + put_page_and_type(
> + mfn_to_page(d->arch.pv_domain.pirq_eoi_map_mfn));
> + d->arch.pv_domain.pirq_eoi_map = NULL;
> + d->arch.pv_domain.auto_unmask = 0;
> + }
> + }
> + }
> +
> + domain_unpause(d);
> + rcu_unlock_domain(d);
> +
> + return 0;
> +}
> +
> void arch_dump_domain_info(struct domain *d)
> {
> paging_dump_domain_info(d);
> diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
> index 5beeccb..0e4dde4 100644
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -762,6 +762,8 @@ ENTRY(hypercall_table)
> .quad do_domctl
> .quad do_kexec_op
> .quad do_tmem_op
> + .quad do_ni_hypercall /* reserved for XenClient */
> + .quad do_reset_vcpu_op /* 40 */
> .rept __HYPERVISOR_arch_0-((.-hypercall_table)/8)
> .quad do_ni_hypercall
> .endr
> @@ -810,6 +812,8 @@ ENTRY(hypercall_args_table)
> .byte 1 /* do_domctl */
> .byte 2 /* do_kexec */
> .byte 1 /* do_tmem_op */
> + .byte 0 /* do_ni_hypercall */
> + .byte 1 /* do_reset_vcpu_op */ /* 40 */
> .rept __HYPERVISOR_arch_0-(.-hypercall_args_table)
> .byte 0 /* do_ni_hypercall */
> .endr
> diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
> index 3cab74f..696f4a3 100644
> --- a/xen/include/public/xen.h
> +++ b/xen/include/public/xen.h
> @@ -101,6 +101,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
> #define __HYPERVISOR_kexec_op 37
> #define __HYPERVISOR_tmem_op 38
> #define __HYPERVISOR_xc_reserved_op 39 /* reserved for XenClient */
> +#define __HYPERVISOR_reset_vcpu_op 40
Why can this not be a domctl subop ?
~Andrew
>
> /* Architecture-specific hypercall definitions. */
> #define __HYPERVISOR_arch_0 48