qemu-devel.nongnu.org archive mirror
From: Michael Roth <mdroth@linux.vnet.ibm.com>
To: Luiz Capitulino <lcapitulino@redhat.com>
Cc: Amit Shah <amit.shah@redhat.com>, qemu-devel <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] [RFC] qemu-ga: Introduce guest-hibernate command
Date: Thu, 08 Dec 2011 21:18:00 -0600
Message-ID: <4EE17DE8.5080503@linux.vnet.ibm.com>
In-Reply-To: <20111208165258.17ab95f7@doriath>

On 12/08/2011 12:52 PM, Luiz Capitulino wrote:
> This is basically suspend to disk on a Linux guest.
>
> Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
> ---
>
> This is an RFC because I did it as simple as possible and I'm open to
> suggestions...
>
> Now, while testing this or even "echo disk > /sys/power/state" I get several
> funny results. Sometimes qemu just dies after printing this message:
>
>   "Guest moved used index from 20151 to 1"
>
> Sometimes it doesn't die, but I'm unable to log into the guest: I type the
> username & password but the terminal kind of locks (the shell doesn't run).
>
> Sometimes it works...
>

Here's the tail-end of the trace...

virtio_queue_notify 237.880 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40
virtqueue_pop 3.701 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 in_num=0x2 out_num=0x1
virtio_blk_rw_complete 51.613 req=0x7f11f5966110 ret=0x0
virtio_blk_req_complete 1.327 req=0x7f11f5966110 status=0x0
virtqueue_fill 1.187 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 len=0x1001 idx=0x0
virtqueue_flush 1.746 vq=0x7f11f4cb4d40 count=0x1
virtio_notify 1.537 vdev=0x7f11f4c8f5e0 vq=0x7f11f4cb4d40
virtio_queue_notify 374.978 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40
virtqueue_pop 3.631 vq=0x7f11f4cb4d40 elem=0x7f11f5faec58 in_num=0x2 out_num=0x1
virtio_blk_rw_complete 49.029 req=0x7f11f5faec50 ret=0x0
virtio_blk_req_complete 1.327 req=0x7f11f5faec50 status=0x0
virtqueue_fill 1.327 vq=0x7f11f4cb4d40 elem=0x7f11f5faec58 len=0x1001 idx=0x0
virtqueue_flush 1.746 vq=0x7f11f4cb4d40 count=0x1
virtio_notify 1.397 vdev=0x7f11f4c8f5e0 vq=0x7f11f4cb4d40
virtio_queue_notify 245.073 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40
virtqueue_pop 3.631 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 in_num=0x2 out_num=0x1
virtio_blk_rw_complete 47.702 req=0x7f11f5966110 ret=0x0
virtio_blk_req_complete 1.257 req=0x7f11f5966110 status=0x0
virtqueue_fill 1.327 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 len=0x1001 idx=0x0
virtqueue_flush 1.816 vq=0x7f11f4cb4d40 count=0x1
virtio_notify 1.327 vdev=0x7f11f4c8f5e0 vq=0x7f11f4cb4d40
virtio_queue_notify 450.616 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40
virtqueue_pop 4.051 vq=0x7f11f4cb4d40 elem=0x7f11f5faec58 in_num=0x2 out_num=0x1
virtio_blk_rw_complete 67.885 req=0x7f11f5faec50 ret=0x0
virtio_blk_req_complete 1.327 req=0x7f11f5faec50 status=0x0
virtqueue_fill 1.327 vq=0x7f11f4cb4d40 elem=0x7f11f5faec58 len=0x1001 idx=0x0
virtqueue_flush 1.607 vq=0x7f11f4cb4d40 count=0x1
virtio_notify 1.257 vdev=0x7f11f4c8f5e0 vq=0x7f11f4cb4d40
virtio_queue_notify 196.813 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40
virtqueue_pop 3.562 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 in_num=0x2 out_num=0x1
virtio_blk_rw_complete 47.492 req=0x7f11f5966110 ret=0x0
virtio_blk_req_complete 1.327 req=0x7f11f5966110 status=0x0
virtqueue_fill 1.327 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 len=0x1001 idx=0x0
virtqueue_flush 1.676 vq=0x7f11f4cb4d40 count=0x1
virtio_notify 1.397 vdev=0x7f11f4c8f5e0 vq=0x7f11f4cb4d40
virtio_queue_notify 882289.570 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40

It doesn't seem to tell us much... but there's a bunch of successful
reads before the final virtio_queue_notify, and that final notify takes
roughly three orders of magnitude longer than the previous ones
(882289.570 vs. the usual few hundred). I can only speculate at this
point, but I would guess this is the point where the guest has finished
loading the saved memory image from disk and is attempting to restore
the previous state.

In the kernel there's a virtio_pci_suspend() PM callback that seems to
get called around this time, with virtio_pci_resume() then restoring the
saved PCI config on the way back up. Could that be switching us back to
an older vring and throwing the QEMU side out of whack?
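
Roughly, that callback pair looks something like the sketch below. This
is from memory and simplified rather than a verbatim copy of
drivers/virtio/virtio_pci.c, so treat the details as illustrative:

#ifdef CONFIG_PM
static int virtio_pci_suspend(struct pci_dev *pci_dev, pm_message_t state)
{
	/* Save PCI config space before the hibernation image is written out. */
	pci_save_state(pci_dev);
	pci_set_power_state(pci_dev, PCI_D3hot);
	return 0;
}

static int virtio_pci_resume(struct pci_dev *pci_dev)
{
	/*
	 * Restore the config that was saved at suspend time.  If any
	 * pre-hibernate device state gets reinstated here, QEMU could end
	 * up looking at a stale vring, which would fit the "Guest moved
	 * used index" error quoted above.
	 */
	pci_restore_state(pci_dev);
	pci_set_power_state(pci_dev, PCI_D0);
	return 0;
}
#endif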


>   qapi-schema-guest.json     |   11 +++++++++++
>   qga/guest-agent-commands.c |   19 +++++++++++++++++++
>   2 files changed, 30 insertions(+), 0 deletions(-)
>
> diff --git a/qapi-schema-guest.json b/qapi-schema-guest.json
> index fde5971..2c5bbcf 100644
> --- a/qapi-schema-guest.json
> +++ b/qapi-schema-guest.json
> @@ -215,3 +215,14 @@
>   ##
>   { 'command': 'guest-fsfreeze-thaw',
>     'returns': 'int' }
> +
> +##
> +# @guest-hibernate
> +#
> +# Save RAM contents to disk and powerdown the guest.
> +#
> +# Notes: This command doesn't return on success.
> +#
> +# Since: 1.1
> +##
> +{ 'command': 'guest-hibernate' }
> diff --git a/qga/guest-agent-commands.c b/qga/guest-agent-commands.c
> index 6da9904..9dd4060 100644
> --- a/qga/guest-agent-commands.c
> +++ b/qga/guest-agent-commands.c
> @@ -550,6 +550,25 @@ int64_t qmp_guest_fsfreeze_thaw(Error **err)
>   }
>   #endif
>
> +#define LINUX_SYS_STATE_FILE "/sys/power/state"
> +
> +void qmp_guest_hibernate(Error **err)
> +{
> +    int fd;
> +
> +    fd = open(LINUX_SYS_STATE_FILE, O_WRONLY);
> +    if (fd < 0) {
> +        error_set(err, QERR_OPEN_FILE_FAILED, LINUX_SYS_STATE_FILE);
> +        return;
> +    }
> +
> +    if (write(fd, "disk", 4) < 0) {
> +        error_set(err, QERR_UNDEFINED_ERROR);
> +    }
> +
> +    close(fd);
> +}
> +
>   /* register init/cleanup routines for stateful command groups */
>   void ga_command_state_init(GAState *s, GACommandState *cs)
>   {
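
For completeness, assuming the usual qemu-ga wire protocol (QMP-style
JSON over the agent's virtio-serial channel), the new command would be
driven from the host with a single line such as:

{ "execute": "guest-hibernate" }

Per the schema note above, no reply should come back on success, since
the guest goes down before the agent has a chance to respond.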

Thread overview: 12+ messages
2011-12-08 18:52 [Qemu-devel] [RFC] qemu-ga: Introduce guest-hibernate command Luiz Capitulino
2011-12-08 19:07 ` Daniel P. Berrange
2011-12-08 19:16   ` Luiz Capitulino
2011-12-08 23:11 ` Andreas Färber
2011-12-09  1:14 ` Michael Roth
2011-12-09 12:23   ` Luiz Capitulino
2011-12-09  3:18 ` Michael Roth [this message]
2011-12-09 12:22   ` Luiz Capitulino
2011-12-09 12:35     ` Amit Shah
2011-12-09  5:01 ` Amit Shah
2011-12-11 10:00 ` Dor Laor
2011-12-12 12:39   ` Luiz Capitulino
