From: Michael Roth
Date: Thu, 08 Dec 2011 21:18:00 -0600
Subject: Re: [Qemu-devel] [RFC] qemu-ga: Introduce guest-hibernate command
To: Luiz Capitulino
Cc: Amit Shah, qemu-devel
Message-ID: <4EE17DE8.5080503@linux.vnet.ibm.com>
In-Reply-To: <20111208165258.17ab95f7@doriath>

On 12/08/2011 12:52 PM, Luiz Capitulino wrote:
> This is basically suspend to disk on a Linux guest.
>
> Signed-off-by: Luiz Capitulino
> ---
>
> This is an RFC because I did it as simple as possible and I'm open to
> suggestions...
>
> Now, while testing this or even "echo disk > /sys/power/state" I get several
> funny results. Some times qemu just dies after printing that message:
>
> "Guest moved used index from 20151 to 1"
>
> Some times it doesn't die, but I'm unable to log into the guest: I type
> username & password but the terminal kind of locks (the shell doesn't run).
>
> Some times it works...
>

Here's the tail-end of the trace...
virtio_queue_notify 237.880 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40
virtqueue_pop 3.701 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 in_num=0x2 out_num=0x1
virtio_blk_rw_complete 51.613 req=0x7f11f5966110 ret=0x0
virtio_blk_req_complete 1.327 req=0x7f11f5966110 status=0x0
virtqueue_fill 1.187 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 len=0x1001 idx=0x0
virtqueue_flush 1.746 vq=0x7f11f4cb4d40 count=0x1
virtio_notify 1.537 vdev=0x7f11f4c8f5e0 vq=0x7f11f4cb4d40
...
virtio_queue_notify 374.978 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40
virtqueue_pop 3.631 vq=0x7f11f4cb4d40 elem=0x7f11f5faec58 in_num=0x2 out_num=0x1
virtio_blk_rw_complete 49.029 req=0x7f11f5faec50 ret=0x0
virtio_blk_req_complete 1.327 req=0x7f11f5faec50 status=0x0
virtqueue_fill 1.327 vq=0x7f11f4cb4d40 elem=0x7f11f5faec58 len=0x1001 idx=0x0
virtqueue_flush 1.746 vq=0x7f11f4cb4d40 count=0x1
virtio_notify 1.397 vdev=0x7f11f4c8f5e0 vq=0x7f11f4cb4d40
virtio_queue_notify 245.073 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40
virtqueue_pop 3.631 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 in_num=0x2 out_num=0x1
virtio_blk_rw_complete 47.702 req=0x7f11f5966110 ret=0x0
virtio_blk_req_complete 1.257 req=0x7f11f5966110 status=0x0
virtqueue_fill 1.327 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 len=0x1001 idx=0x0
virtqueue_flush 1.816 vq=0x7f11f4cb4d40 count=0x1
virtio_notify 1.327 vdev=0x7f11f4c8f5e0 vq=0x7f11f4cb4d40
virtio_queue_notify 450.616 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40
virtqueue_pop 4.051 vq=0x7f11f4cb4d40 elem=0x7f11f5faec58 in_num=0x2 out_num=0x1
virtio_blk_rw_complete 67.885 req=0x7f11f5faec50 ret=0x0
virtio_blk_req_complete 1.327 req=0x7f11f5faec50 status=0x0
virtqueue_fill 1.327 vq=0x7f11f4cb4d40 elem=0x7f11f5faec58 len=0x1001 idx=0x0
virtqueue_flush 1.607 vq=0x7f11f4cb4d40 count=0x1
virtio_notify 1.257 vdev=0x7f11f4c8f5e0 vq=0x7f11f4cb4d40
virtio_queue_notify 196.813 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40
virtqueue_pop 3.562 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 in_num=0x2 out_num=0x1
virtio_blk_rw_complete 47.492 req=0x7f11f5966110 ret=0x0
virtio_blk_req_complete 1.327 req=0x7f11f5966110 status=0x0
virtqueue_fill 1.327 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 len=0x1001 idx=0x0
virtqueue_flush 1.676 vq=0x7f11f4cb4d40 count=0x1
virtio_notify 1.397 vdev=0x7f11f4c8f5e0 vq=0x7f11f4cb4d40
virtio_queue_notify 882289.570 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40

It doesn't seem to tell us much... but there's a bunch of successful reads
before the final virtio_queue_notify, and that notify takes quite a bit
longer than the previous ones. I can only speculate at this point, but I
would guess this is when the guest has completed loading the saved memory
from disk and is attempting to restore the previous state. In the kernel
there's a virtio_pci_suspend() PM callback that seems to get called around
this time, with virtio_pci_resume() restoring the PCI config afterward.
Could that be switching us to an older vring and throwing the QEMU side out
of whack?

>  qapi-schema-guest.json     |   11 +++++++++++
>  qga/guest-agent-commands.c |   19 +++++++++++++++++++
>  2 files changed, 30 insertions(+), 0 deletions(-)
>
> diff --git a/qapi-schema-guest.json b/qapi-schema-guest.json
> index fde5971..2c5bbcf 100644
> --- a/qapi-schema-guest.json
> +++ b/qapi-schema-guest.json
> @@ -215,3 +215,14 @@
>  ##
>  { 'command': 'guest-fsfreeze-thaw',
>    'returns': 'int' }
> +
> +##
> +# @guest-hibernate
> +#
> +# Save RAM contents to disk and powerdown the guest.
> +#
> +# Notes: This command doesn't return on success.
> +#
> +# Since: 1.1
> +##
> +{ 'command': 'guest-hibernate' }
> diff --git a/qga/guest-agent-commands.c b/qga/guest-agent-commands.c
> index 6da9904..9dd4060 100644
> --- a/qga/guest-agent-commands.c
> +++ b/qga/guest-agent-commands.c
> @@ -550,6 +550,25 @@ int64_t qmp_guest_fsfreeze_thaw(Error **err)
>  }
>  #endif
>
> +#define LINUX_SYS_STATE_FILE "/sys/power/state"
> +
> +void qmp_guest_hibernate(Error **err)
> +{
> +    int fd;
> +
> +    fd = open(LINUX_SYS_STATE_FILE, O_WRONLY);
> +    if (fd < 0) {
> +        error_set(err, QERR_OPEN_FILE_FAILED, LINUX_SYS_STATE_FILE);
> +        return;
> +    }
> +
> +    if (write(fd, "disk", 4) < 0) {
> +        error_set(err, QERR_UNDEFINED_ERROR);
> +    }
> +
> +    close(fd);
> +}
> +
>  /* register init/cleanup routines for stateful command groups */
>  void ga_command_state_init(GAState *s, GACommandState *cs)
>  {
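
For reference, the virtio_pci PM hooks I mentioned above are roughly the
following (paraphrased from memory from drivers/virtio/virtio_pci.c of that
era, so take this as a sketch rather than the exact upstream code):

#ifdef CONFIG_PM
/* Sketch of the legacy PCI PM hooks in drivers/virtio/virtio_pci.c,
 * paraphrased from memory -- not the exact upstream code. */
static int virtio_pci_suspend(struct pci_dev *pci_dev, pm_message_t state)
{
        /* Only standard PCI config space gets saved here... */
        pci_save_state(pci_dev);
        pci_set_power_state(pci_dev, PCI_D3hot);
        return 0;
}

static int virtio_pci_resume(struct pci_dev *pci_dev)
{
        /* ...and restored here; nothing in this path re-programs the
         * virtio-specific registers (queue selection, queue PFN, etc.). */
        pci_restore_state(pci_dev);
        pci_set_power_state(pci_dev, PCI_D0);
        return 0;
}
#endif

If that's really all that runs across hibernate/resume, the device and the
driver could end up disagreeing about which vring (and which used index) is
current, which would at least be consistent with the "Guest moved used
index" message above.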