xen-devel.lists.xenproject.org archive mirror
From: AP Xen <apxeng@gmail.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: xen-devel@lists.xensource.com
Subject: Re: xl save -c issues with Windows 7 Ultimate
Date: Fri, 6 May 2011 15:01:15 -0700	[thread overview]
Message-ID: <BANLkTinp+=bYO9pLrRE5r5q_kESxnTSUxQ@mail.gmail.com> (raw)
In-Reply-To: <BANLkTimog_zCuGDvxYu=1+xweXWLeehDmg@mail.gmail.com>

On Tue, May 3, 2011 at 4:07 AM, George Dunlap
<George.Dunlap@eu.citrix.com> wrote:
> Have you tried it with other operating systems and found it to work?
> I.e., is it something specific to Windows 7, or is it a general HVM /
> Windows problem?

I tried this with a CentOS 5.6 HVM guest and saw the same behavior: the
save itself completes (rc=0), but the guest never resumes after the
checkpoint.

root@ubuntu:~# xl -vvv save -c centos /etc/xen/centoschk
Saving to /etc/xen/centoschk new xl format (info 0x0/0x0/255)
libxl: debug: libxl_dom.c:384:libxl__domain_suspend_common_callback issuing PVHVM suspend request via XenBus control node
libxl: debug: libxl_dom.c:389:libxl__domain_suspend_common_callback wait for the guest to acknowledge suspend request
libxl: debug: libxl_dom.c:434:libxl__domain_suspend_common_callback guest acknowledged suspend request
libxl: debug: libxl_dom.c:438:libxl__domain_suspend_common_callback wait for the guest to suspend
libxl: debug: libxl_dom.c:450:libxl__domain_suspend_common_callback guest has suspended
xc: debug: outbuf_write: 4194304 > 90092@16687124
xc: debug: outbuf_write: 4194304 > 4169716@12607500
[... identical outbuf_write line repeated 82 more times ...]
xc: detail: type fail: page 0 mfn 000f2000
xc: detail: type fail: page 1 mfn 000f2001
xc: detail: type fail: page 2 mfn 000f2002
xc: detail: delta 9371ms, dom0 25%, target 0%, sent 920Mb/s, dirtied 0Mb/s 0 pages
xc: detail: Total pages sent= 263168 (0.25x)
xc: detail: (of which 0 were fixups)
xc: detail: All memory is saved
xc: detail: Save exit rc=0
libxl: debug: libxl_dom.c:534:libxl__domain_save_device_model Saving device model state to /var/lib/xen/qemu-save.7
libxl: debug: libxl_dom.c:546:libxl__domain_save_device_model Qemu state is 7204 bytes


root@ubuntu:~# xl list
Name                                        ID   Mem VCPUs	State	Time(s)
Domain-0                                     0  2914     4     r-----    6576.4
centos                                       7  1019     2     ---ss-     575.4
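If I'm reading the State column right, the `s` flags mean the guest is still in a shutdown/suspended state rather than running, even though `-c` (checkpoint) is supposed to leave it running after the save. A quick scripted check of the row above (the captured line is from this report; `xl` itself is not invoked, so this runs anywhere):

```shell
# Column 5 of "xl list" output is the State field.
row='centos                                        7  1019     2     ---ss-     575.4'
state=$(printf '%s\n' "$row" | awk '{print $5}')
printf '%s\n' "$state"          # prints: ---ss-

# After a successful "xl save -c" the guest should be running again, so the
# State field should contain no "s"; here it does, i.e. the guest never resumed.
case "$state" in
  *s*) echo "guest still suspended" ;;
  *)   echo "guest resumed" ;;
esac
```

This matches what I originally reported for the Windows 7 guest, so it does not look Windows-specific.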


Thread overview: 15+ messages
2011-04-29 19:28 xl save -c issues with Windows 7 Ultimate AP Xen
2011-05-03 11:07 ` George Dunlap
2011-05-03 19:04   ` AP Xen
2011-05-06 22:01   ` AP Xen [this message]
2011-05-03 13:01 ` Ian Campbell
2011-05-03 14:09   ` Shriram Rajagopalan
2011-05-03 14:31     ` Ian Campbell
2011-05-03 14:42       ` Tim Deegan
2011-05-03 16:32         ` Shriram Rajagopalan
2011-05-03 16:35           ` Brendan Cully
2011-05-03 19:17     ` AP Xen
2011-05-05 14:42       ` Shriram Rajagopalan
2011-05-05 21:01         ` AP Xen
2011-05-06  2:31           ` Shriram Rajagopalan
2011-05-03 19:15   ` AP Xen
