xen-devel.lists.xenproject.org archive mirror
From: taojiang628 <taojiang628@163.com>
To: "Shriram Rajagopalan" <rshriram@gmail.com>
Cc: xen-devel <xen-devel@lists.xensource.com>
Subject: Re: Re: [Xen-devel] remus error
Date: Fri, 9 Jul 2010 09:02:18 +0800	[thread overview]
Message-ID: <201007090902180788266@163.com> (raw)
In-Reply-To: AANLkTilVJ2GX2LyBO0mYfI1qnTvwISKYzo0VXcAgEamU@mail.gmail.com




Hello:
Thanks for your help, but there is no "suspend_evtchn_*" file in /var/lib/xen/. Is there anything else I can try for this problem?

2010-07-09 



taojiang628 



From: Shriram Rajagopalan
Sent: 2010-07-08 02:07:46
To: taojiang628
Cc: xen-devel
Subject: Re: [Xen-devel] remus error
 
Run:
rm /var/lib/xen/suspend_evtchn_*
It's probably a stray lock file left over in the /var/lib/xen directory that is preventing Remus from subscribing to the suspend event channel.

This assumes you have xen-4.0-testing with the patch that creates a lock file per domain. If not, note that you won't be able to run more than one Remus VM simultaneously.
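The cleanup above can be sketched as a small shell snippet. To keep it safe to try anywhere, this version operates on a scratch directory (DEMO_DIR and the sample lock file are hypothetical stand-ins); on an actual Remus host the directory in question would be /var/lib/xen itself.

```shell
#!/bin/sh
# Hedged sketch: simulate and remove stale per-domain suspend event-channel
# lock files. DEMO_DIR stands in for /var/lib/xen on a real host.
DEMO_DIR=$(mktemp -d)
touch "$DEMO_DIR/suspend_evtchn_centos5.4"   # simulate a stale lock

# Remove any leftover suspend_evtchn_* lock files.
for f in "$DEMO_DIR"/suspend_evtchn_*; do
    [ -e "$f" ] || continue                  # glob matched nothing
    echo "removing stale lock: ${f##*/}"
    rm -f "$f"
done
```

The guarded loop avoids the unquoted-glob pitfall of a bare `rm`: if no lock files exist, the pattern expands to itself and the `[ -e ... ]` test skips it instead of erroring.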


On Wed, Jul 7, 2010 at 1:54 AM, taojiang628 <taojiang628@163.com> wrote:

hello:
When I run the command below, I get the error shown here. Can anyone tell me what I should do? Thank you!

[root@localhost ~]# remus --no-net centos5.4 192.168.10.190
Disk is not replicated: tap:aio:/mnt/disk.img,xvda,w
WARNING: suspend event channel unavailable, falling back to slow xenstore signalling
Had 0 unexplained entries in p2m table
 1: sent 130800, skipped 272, delta 46142ms, dom0 24%, target 0%, sent 92Mb/s, dirtied 0Mb/s 420 pages
 2: sent 420, skipped 0, delta 133ms, dom0 24%, target 0%, sent 103Mb/s, dirtied 0Mb/s 0 pages
 3: sent 0, skipped 0, Start last iteration
PROF: suspending at 1278492428.700331
SUSPEND shinfo 00000310
delta 25ms, dom0 88%, target 12%, sent 0Mb/s, dirtied 138Mb/s 106 pages
 4: sent 106, skipped 0, delta 3ms, dom0 100%, target 0%, sent 1157Mb/s, dirtied 1157Mb/s 106 pages
Total pages sent= 131326 (0.99x)
(of which 0 were fixups)
All memory is saved
PROF: resumed at 1278492428.731093
PROF: flushed memory at 1278492428.734409
PROF: suspending at 1278492428.930008
timeout polling fd
ERROR Internal error: Suspend request failed
ERROR Internal error: Domain appears not to have suspended
Save exit rc=1
2010-07-07 



taojiang628 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel





-- 
perception is but an offspring of its own self

Thread overview: 3+ messages
2010-07-07  8:54 remus error taojiang628
2010-07-07 18:07 ` Shriram Rajagopalan
2010-07-09  1:02   ` taojiang628 [this message]
