From: Yufang Zhang <yuzhang@redhat.com>
To: xen-devel <xen-devel@lists.xensource.com>
Subject: elapse time computing when restarting VM?
Date: Sun, 15 Aug 2010 08:27:24 -0400 (EDT)
Message-ID: <1586042608.3892661281875244077.JavaMail.root@zmail02.collab.prod.int.phx2.redhat.com>
In-Reply-To: <1822033966.3892641281875172829.JavaMail.root@zmail02.collab.prod.int.phx2.redhat.com>

Hi all,
Currently, before restarting a VM, xend computes the elapsed time since the VM started. If that elapsed time is less than MINIMUM_RESTART_TIME (60 seconds), xend refuses to restart the VM and destroys it instead, to avoid restart loops. However, when a guest crashes at boot time and enable-dump is enabled, a core dump is taken before the guest is restarted, and the dump can take quite a while (depending on the memory size of the guest). In that situation the computed elapsed time is inflated by the dump, so xend does not destroy the guest, and the guest falls into a restart-crash-dumpcore loop, wasting CPU time and *disk space* in Domain0.

I actually hit this problem when I upgraded a 2048M guest to a problematic kernel. The guest crashed at boot time, a core dump was taken for it, and the guest then rebooted and went through the same steps again. My Domain0 ended up full of core dump files from that guest. So does it make sense to solve this problem properly, rather than just enlarging MINIMUM_RESTART_TIME? Is the patch below (after a short timing sketch) reasonable?
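
To make the timing concrete, here is a rough sketch of the two computations (this is not xend code; the crash and dump durations are made-up assumptions, and only the comparison against MINIMUM_RESTART_TIME matters):

import time

MINIMUM_RESTART_TIME = 60            # seconds, as in XendDomainInfo.py

start_time = time.time()             # guest starts booting
crash_time = start_time + 10         # guest crashes 10s into boot
restart_attempt = crash_time + 120   # core dump of a 2048M guest takes ~2 minutes

# What xend computes today, after the dump has finished:
elapse = restart_attempt - start_time   # 130s, not < 60s -> restart allowed, loop continues

# What the patch computes for crash restarts:
elapse = crash_time - start_time        # 10s < 60s -> restart refused, loop broken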

diff -r 774dfc178c39 tools/python/xen/xend/XendDomainInfo.py
--- a/tools/python/xen/xend/XendDomainInfo.py   Thu Aug 12 17:06:21 2010 +0100
+++ b/tools/python/xen/xend/XendDomainInfo.py   Mon Aug 16 12:16:45 2010 +0800
@@ -2060,7 +2060,7 @@
                 log.warn('Domain has crashed: name=%s id=%d.',
                          self.info['name_label'], self.domid)
                 self._writeVm(LAST_SHUTDOWN_REASON, 'crash')
-
+                self.info['crash_time'] = time.time()
                 restart_reason = 'crash'
                 self._stateSet(DOM_STATE_HALTED)

@@ -2188,7 +2188,15 @@
         old_domid = self.domid
         self._writeVm(RESTART_IN_PROGRESS, 'True')

-        elapse = time.time() - self.info['start_time']
+        # Measure elapsed time up to the crash rather than up to now, since
+        # a core dump taken before this restart may have delayed it.
+        if 'crash_time' in self.info and \
+               (xoptions.get_enable_dump() or self.get_on_crash()
+                in ['coredump_and_destroy', 'coredump_and_restart']):
+            elapse = self.info['crash_time'] - self.info['start_time']
+        else:
+            elapse = time.time() - self.info['start_time']
+
         if elapse < MINIMUM_RESTART_TIME:
             log.error('VM %s restarting too fast (Elapsed time: %f seconds). '
                       'Refusing to restart to avoid loops.',
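
With this change, for crash restarts where a core dump may have been taken, the elapsed time is measured up to the recorded crash time rather than up to the moment the restart is attempted, so however long the dump takes, a guest that crashes early in boot is still detected as restarting too fast. The 'crash_time' guard keeps restarts where no crash time was recorded on the old computation.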

I have tested this situation with the patch, and it works well when the guest crashes at boot time.

Best Regards.

Yufang
