qemu-devel.nongnu.org archive mirror
* [Qemu-devel] Hibernate and qemu-nbd
@ 2013-09-17 14:10 Mark Trumpold
  2013-09-18 13:12 ` Stefan Hajnoczi
  0 siblings, 1 reply; 9+ messages in thread
From: Mark Trumpold @ 2013-09-17 14:10 UTC (permalink / raw)
  To: qemu-devel@nongnu.org

Hello,

I have been using 'qemu-nbd' and 'qemu-img' for some time to provide
loop filesystems in my environment.

Recently I have been experimenting with hibernating (suspend to disk)
the physical host on which I have qemu running.

I am using the kernel functionality directly with the commands:
    echo platform >/sys/power/disk
    echo disk >/sys/power/state

The following appears in dmesg when I attempt to hibernate:

====================================================
[   38.881397] nbd (pid 1473: qemu-nbd) got signal 0
[   38.881401] block nbd0: shutting down socket
[   38.881404] block nbd0: Receive control failed (result -4)
[   38.881417] block nbd0: queue cleared
[   87.463133] block nbd0: Attempted send on closed socket
[   87.463137] end_request: I/O error, dev nbd0, sector 66824
====================================================

My environment:
  Debian: 6.0.5
  Kernel: 3.3.1
  Qemu userspace: 1.2.0

Thank you for any thoughts on this one.

Regards,
Mark Trumpold

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Qemu-devel] Hibernate and qemu-nbd
  2013-09-17 14:10 Mark Trumpold
@ 2013-09-18 13:12 ` Stefan Hajnoczi
  0 siblings, 0 replies; 9+ messages in thread
From: Stefan Hajnoczi @ 2013-09-18 13:12 UTC (permalink / raw)
  To: Mark Trumpold
  Cc: nbd-general, w, bonzini, Paul Clements, qemu-devel@nongnu.org

On Tue, Sep 17, 2013 at 07:10:44AM -0700, Mark Trumpold wrote:
> I am using the kernel functionality directly with the commands:
>     echo platform >/sys/power/disk
>     echo disk >/sys/power/state
> 
> The following appears in dmesg when I attempt to hibernate:
> 
> ====================================================
> [   38.881397] nbd (pid 1473: qemu-nbd) got signal 0
> [   38.881401] block nbd0: shutting down socket
> [   38.881404] block nbd0: Receive control failed (result -4)
> [   38.881417] block nbd0: queue cleared
> [   87.463133] block nbd0: Attempted send on closed socket
> [   87.463137] end_request: I/O error, dev nbd0, sector 66824
> ====================================================
> 
> My environment:
>   Debian: 6.0.5
>   Kernel: 3.3.1
>   Qemu userspace: 1.2.0

This could be a bug in the nbd client kernel module.
drivers/block/nbd.c:sock_xmit() does the following:

            result = kernel_recvmsg(sock, &msg, &iov, 1, size,
                                    msg.msg_flags);

    if (signal_pending(current)) {
            siginfo_t info;
            printk(KERN_WARNING "nbd (pid %d: %s) got signal %d\n",
                    task_pid_nr(current), current->comm,
                    dequeue_signal_lock(current, &current->blocked, &info));
            result = -EINTR;
            sock_shutdown(nbd, !send);
            break;
    }

The signal number in the log output looks bogus, we shouldn't get 0.
sock_xmit() actually blocks all signals except SIGKILL before calling
kernel_recvmsg().  I guess this is an artifact of the suspend-to-disk
operation, maybe the signal pending flag is set on the process.
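
For reference, the signal-blocking setup around that call looks roughly
like this (a paraphrased sketch from memory, not the exact upstream
lines):

    sigset_t blocked, oldset;

    /* block everything except SIGKILL for the duration of the transfer */
    siginitsetinv(&blocked, sigmask(SIGKILL));
    sigprocmask(SIG_SETMASK, &blocked, &oldset);

    /* ... the kernel_sendmsg()/kernel_recvmsg() loop, including the
     * signal_pending() check quoted above ... */

    sigprocmask(SIG_SETMASK, &oldset, NULL);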

Perhaps someone with a better understanding of the kernel internals can
check this?

What happens next is that the nbd kernel module shuts down the NBD connection.

As a workaround, please try running a separate nbd-client(1) process and drop
the qemu-nbd -c command-line argument.  This way nbd-client(1) uses the
nbd kernel module instead of the qemu-nbd process and you'll get the
benefit of nbd-client's automatic reconnect.
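
For example, something along these lines (port number and image path are
placeholders for your setup):

    qemu-nbd -p 10809 /path/to/image.img &
    nbd-client -persist localhost 10809 /dev/nbd0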

Stefan

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Qemu-devel] Hibernate and qemu-nbd
@ 2013-09-19 20:44 Mark Trumpold
  2013-09-20  5:14 ` Stefan Hajnoczi
  0 siblings, 1 reply; 9+ messages in thread
From: Mark Trumpold @ 2013-09-19 20:44 UTC (permalink / raw)
  To: Stefan Hajnoczi, Mark Trumpold
  Cc: nbd-general, w, bonzini, Paul Clements, qemu-devel


>-----Original Message-----
>From: Stefan Hajnoczi [mailto:stefanha@gmail.com]
>Sent: Wednesday, September 18, 2013 06:12 AM
>To: 'Mark Trumpold'
>Cc: qemu-devel@nongnu.org, 'Paul Clements', nbd-general@lists.sourceforge.net, 
>bonzini@stefanha-thinkpad.redhat.com, w@uter.be
>Subject: Re: [Qemu-devel] Hibernate and qemu-nbd
>
>On Tue, Sep 17, 2013 at 07:10:44AM -0700, Mark Trumpold wrote:
>> I am using the kernel functionality directly with the commands:
>>     echo platform >/sys/power/disk
>>     echo disk >/sys/power/state
>> 
>> The following appears in dmesg when I attempt to hibernate:
>> 
>> ====================================================
>> [   38.881397] nbd (pid 1473: qemu-nbd) got signal 0
>> [   38.881401] block nbd0: shutting down socket
>> [   38.881404] block nbd0: Receive control failed (result -4)
>> [   38.881417] block nbd0: queue cleared
>> [   87.463133] block nbd0: Attempted send on closed socket
>> [   87.463137] end_request: I/O error, dev nbd0, sector 66824
>> ====================================================
>> 
>> My environment:
>>   Debian: 6.0.5
>>   Kernel: 3.3.1
>>   Qemu userspace: 1.2.0
>
>This could be a bug in the nbd client kernel module.
>drivers/block/nbd.c:sock_xmit() does the following:
>
>            result = kernel_recvmsg(sock, &msg, &iov, 1, size,
>                                    msg.msg_flags);
>
>    if (signal_pending(current)) {
>            siginfo_t info;
>            printk(KERN_WARNING "nbd (pid %d: %s) got signal %d\n",
>                    task_pid_nr(current), current->comm,
>                    dequeue_signal_lock(current, &current->blocked, &info));
>            result = -EINTR;
>            sock_shutdown(nbd, !send);
>            break;
>    }
>
>The signal number in the log output looks bogus, we shouldn't get 0.
>sock_xmit() actually blocks all signals except SIGKILL before calling
>kernel_recvmsg().  I guess this is an artifact of the suspend-to-disk
>operation, maybe the signal pending flag is set on the process.
>
>Perhaps someone with a better understanding of the kernel internals can
>check this?
>
>What happens next is that the nbd kernel module shuts down the NBD connection.
>
>As a workaround, please try running a separate nbd-client(1) process and drop
>the qemu-nbd -c command-line argument.  This way nbd-client(1) uses the
>nbd kernel module instead of the qemu-nbd process and you'll get the
>benefit of nbd-client's automatic reconnect.
>
>Stefan
>

Hi Stefan,

Thank you for the information.

I did some experiments per your suggestion.  I wasn't sure if the following
was what you had in mind:

1) Configured 'nbd-server' and started (/etc/nbd-server/config):
  [generic]
  [export]
    exportname = /root/qemu/q1.img
    port = 2000

2) Started 'nbd-client':
   -> nbd-client localhost 2000 /dev/nbd0

3) Verify '/dev/nbd0' is in use (will appear in list):
   -> cat /proc/partitions

At this point I could mount '/dev/nbd0' as expected, but not necessary
to demonstrate a problem.

Now at this point if I enter S1(standby), S3(suspend to ram), or
S4(suspend to disk) I get the same dmesg as before indicating
'nbd0' caught signal 0 and exited.

When I resume I simply repeat step #3 to verify.

==================

Also, before contacting the group I had modified the same kernel source
that you had identified in 'drivers/block/nbd.c:sock_xmit()' so that it
takes no action.  This was strictly for troubleshooting:

199            result = kernel_recvmsg(sock, &msg, &iov, 1, size,
200                                    msg.msg_flags);
201
202    if (signal_pending(current)) {
203            siginfo_t info;
204            printk(KERN_WARNING "nbd (pid %d: %s) got signal %d\n",
205                    task_pid_nr(current), current->comm,
206                    dequeue_signal_lock(current, &current->blocked, &info));
207
208            //result = -EINTR;
209            //sock_shutdown(nbd, !send);
210            //break;
211    }

We then got errors ("Wrong magic ...") in the following section:

/* NULL returned = something went wrong, inform userspace */
static struct request *nbd_read_stat(struct nbd_device *lo)
{
        int result;
        struct nbd_reply reply;
        struct request *req;

        reply.magic = 0;
        result = sock_xmit(lo, 0, &reply, sizeof(reply), MSG_WAITALL);
        if (result <= 0) {
                dev_err(disk_to_dev(lo->disk),
                        "Receive control failed (result %d)\n", result);
                goto harderror;
        }

        if (ntohl(reply.magic) != NBD_REPLY_MAGIC) {
                dev_err(disk_to_dev(lo->disk), "Wrong magic (0x%lx)\n",
                                (unsigned long)ntohl(reply.magic));
                result = -EPROTO;
                goto harderror;


So, it seemed to me the call at line #199 above must be returning an
error after we commented out the signal-handling logic.

Thank you for your attention on this.
Let me know if I followed your suggestion correctly, and whether there
are other tests I can do.

Regards,
Mark T.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Qemu-devel] Hibernate and qemu-nbd
  2013-09-19 20:44 [Qemu-devel] " Mark Trumpold
@ 2013-09-20  5:14 ` Stefan Hajnoczi
  0 siblings, 0 replies; 9+ messages in thread
From: Stefan Hajnoczi @ 2013-09-20  5:14 UTC (permalink / raw)
  To: Mark Trumpold; +Cc: nbd-general, w, bonzini, Paul Clements, qemu-devel

On Thu, Sep 19, 2013 at 10:44 PM, Mark Trumpold <markt@netqa.com> wrote:
>
>>-----Original Message-----
>>From: Stefan Hajnoczi [mailto:stefanha@gmail.com]
>>Sent: Wednesday, September 18, 2013 06:12 AM
>>To: 'Mark Trumpold'
>>Cc: qemu-devel@nongnu.org, 'Paul Clements', nbd-general@lists.sourceforge.net,
>>bonzini@stefanha-thinkpad.redhat.com, w@uter.be
>>Subject: Re: [Qemu-devel] Hibernate and qemu-nbd
>>
>>On Tue, Sep 17, 2013 at 07:10:44AM -0700, Mark Trumpold wrote:
>>> I am using the kernel functionality directly with the commands:
>>>     echo platform >/sys/power/disk
>>>     echo disk >/sys/power/state
>>>
>>> The following appears in dmesg when I attempt to hibernate:
>>>
>>> ====================================================
>>> [   38.881397] nbd (pid 1473: qemu-nbd) got signal 0
>>> [   38.881401] block nbd0: shutting down socket
>>> [   38.881404] block nbd0: Receive control failed (result -4)
>>> [   38.881417] block nbd0: queue cleared
>>> [   87.463133] block nbd0: Attempted send on closed socket
>>> [   87.463137] end_request: I/O error, dev nbd0, sector 66824
>>> ====================================================
>>>
>>> My environment:
>>>   Debian: 6.0.5
>>>   Kernel: 3.3.1
>>>   Qemu userspace: 1.2.0
>>
>>This could be a bug in the nbd client kernel module.
>>drivers/block/nbd.c:sock_xmit() does the following:
>>
>>            result = kernel_recvmsg(sock, &msg, &iov, 1, size,
>>                                    msg.msg_flags);
>>
>>    if (signal_pending(current)) {
>>            siginfo_t info;
>>            printk(KERN_WARNING "nbd (pid %d: %s) got signal %d\n",
>>                    task_pid_nr(current), current->comm,
>>                    dequeue_signal_lock(current, &current->blocked, &info));
>>            result = -EINTR;
>>            sock_shutdown(nbd, !send);
>>            break;
>>    }
>>
>>The signal number in the log output looks bogus, we shouldn't get 0.
>>sock_xmit() actually blocks all signals except SIGKILL before calling
>>kernel_recvmsg().  I guess this is an artifact of the suspend-to-disk
>>operation, maybe the signal pending flag is set on the process.
>>
>>Perhaps someone with a better understanding of the kernel internals can
>>check this?
>>
>>What happens next is that the nbd kernel module shuts down the NBD connection.
>>
>>As a workaround, please try running a separate nbd-client(1) process and drop
>>the qemu-nbd -c command-line argument.  This way nbd-client(1) uses the
>>nbd kernel module instead of the qemu-nbd process and you'll get the
>>benefit of nbd-client's automatic reconnect.
>>
>>Stefan
>>
>
> Hi Stefan,
>
> Thank you for the information.
>
> I did some experiments per your suggestion.  I wasn't sure if the following
> was what you had in mind:
>
> 1) Configured 'nbd-server' and started (/etc/nbd-server/config):
>   [generic]
>   [export]
>     exportname = /root/qemu/q1.img
>     port = 2000

You can use qemu-nbd instead of nbd-server.  This way you'll be able
to serve up qcow2 and other image formats.

Just avoid the qemu-nbd -c option.  This makes qemu-nbd purely run the
NBD network protocol and skips simultaneously running the kernel NBD
client.  (Since qemu-nbd doesn't reconnect when ioctl(NBD_DO_IT) fails
with EINTR the workaround is to use nbd-client(1) to drive the kernel
NBD client instead.)
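
For context, this is roughly the attach sequence that both qemu-nbd -c
and nbd-client perform against the kernel client (a simplified sketch;
error handling and the size/blocksize ioctls are left out, and sock_fd
stands for the already-connected NBD socket):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/nbd.h>

    int nbd = open("/dev/nbd0", O_RDWR);
    ioctl(nbd, NBD_SET_SOCK, sock_fd);   /* hand the socket to the kernel */

    /* NBD_DO_IT blocks for as long as the device is served and returns
     * when the socket is torn down, e.g. with EINTR after the signal in
     * your dmesg.  qemu-nbd -c gives up at that point; nbd-client can
     * reconnect and issue NBD_DO_IT again. */
    ioctl(nbd, NBD_DO_IT);

    ioctl(nbd, NBD_CLEAR_QUE);
    ioctl(nbd, NBD_CLEAR_SOCK);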

> 2) Started 'nbd-client':
>    -> nbd-client localhost 2000 /dev/nbd0
>
> 3) Verify '/dev/nbd0' is in use (will appear in list):
>    -> cat /proc/partitions
>
> At this point I could mount '/dev/nbd0' as expected, but not necessary
> to demonstrate a problem.
>
> Now at this point if I enter S1(standby), S3(suspend to ram), or
> S4(suspend to disk) I get the same dmesg as before indicating
> 'nbd0' caught signal 0 and exited.
>
> When I resume I simply repeat step #3 to verify.

It's expected that you get the same kernel messages.  The difference
should be that /dev/nbd0 is still accessible after resuming from disk
because nbd-client automatically reconnects after the nbd kernel
module bails out with EINTR.

> ==================
>
> Also, before contacting the group I had modified the same kernel source
> that you had identified in 'drivers/block/nbd.c:sock_xmit()' so that it
> takes no action.  This was strictly for troubleshooting:
>
> 199            result = kernel_recvmsg(sock, &msg, &iov, 1, size,
> 200                                    msg.msg_flags);
> 201
> 202    if (signal_pending(current)) {
> 203            siginfo_t info;
> 204            printk(KERN_WARNING "nbd (pid %d: %s) got signal %d\n",
> 205                    task_pid_nr(current), current->comm,
> 206                    dequeue_signal_lock(current, &current->blocked, &info));
> 207
> 208            //result = -EINTR;
> 209            //sock_shutdown(nbd, !send);
> 210            //break;
> 211    }
>
> We then got errors ("Wrong magic ...") in the following section:
>
> /* NULL returned = something went wrong, inform userspace */
> static struct request *nbd_read_stat(struct nbd_device *lo)
> {
>         int result;
>         struct nbd_reply reply;
>         struct request *req;
>
>         reply.magic = 0;
>         result = sock_xmit(lo, 0, &reply, sizeof(reply), MSG_WAITALL);
>         if (result <= 0) {
>                 dev_err(disk_to_dev(lo->disk),
>                         "Receive control failed (result %d)\n", result);
>                 goto harderror;
>         }
>
>         if (ntohl(reply.magic) != NBD_REPLY_MAGIC) {
>                 dev_err(disk_to_dev(lo->disk), "Wrong magic (0x%lx)\n",
>                                 (unsigned long)ntohl(reply.magic));
>                 result = -EPROTO;
>                 goto harderror;
>
>
> So, it seemed to me the call at line #199 above must be returning an
> error after we commented out the signal-handling logic.

I'm not familiar enough with the code to say what is happening.  As
the next step I would print out the kernel_recvmsg() return value when
the signal is pending and look into what happens during
suspend-to-disk (there's some sort of process freezing that takes
place).
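
For example, something like this in the signal_pending() branch (an
untested sketch based on the code quoted above):

    if (signal_pending(current)) {
            siginfo_t info;
            /* also log the recvmsg/sendmsg return value */
            printk(KERN_WARNING "nbd (pid %d: %s) got signal %d (result %d)\n",
                    task_pid_nr(current), current->comm,
                    dequeue_signal_lock(current, &current->blocked, &info),
                    result);
            ...
    }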

Sorry I can't be of more help.  Hopefully someone more familiar with
the nbd kernel module will have time to chime in.

Stefan

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Qemu-devel] Hibernate and qemu-nbd
@ 2013-09-20 18:00 Mark Trumpold
  2013-09-21  9:59 ` Wouter Verhelst
  0 siblings, 1 reply; 9+ messages in thread
From: Mark Trumpold @ 2013-09-20 18:00 UTC (permalink / raw)
  To: Stefan Hajnoczi, Mark Trumpold
  Cc: nbd-general, w, bonzini, Paul Clements, qemu-devel


>-----Original Message-----
>From: Stefan Hajnoczi [mailto:stefanha@gmail.com]
>Sent: Thursday, September 19, 2013 10:14 PM
>To: 'Mark Trumpold'
>Cc: 'qemu-devel', 'Paul Clements', nbd-general@lists.sourceforge.net, 
>bonzini@stefanha-thinkpad.redhat.com, w@uter.be
>Subject: Re: [Qemu-devel] Hibernate and qemu-nbd
>
>On Thu, Sep 19, 2013 at 10:44 PM, Mark Trumpold <markt@netqa.com> wrote:
>>
>>>-----Original Message-----
>>>From: Stefan Hajnoczi [mailto:stefanha@gmail.com]
>>>Sent: Wednesday, September 18, 2013 06:12 AM
>>>To: 'Mark Trumpold'
>>>Cc: qemu-devel@nongnu.org, 'Paul Clements', nbd-general@lists.sourceforge.net,
>>>bonzini@stefanha-thinkpad.redhat.com, w@uter.be
>>>Subject: Re: [Qemu-devel] Hibernate and qemu-nbd
>>>
>>>On Tue, Sep 17, 2013 at 07:10:44AM -0700, Mark Trumpold wrote:
>>>> I am using the kernel functionality directly with the commands:
>>>>     echo platform >/sys/power/disk
>>>>     echo disk >/sys/power/state
>>>>
>>>> The following appears in dmesg when I attempt to hibernate:
>>>>
>>>> ====================================================
>>>> [   38.881397] nbd (pid 1473: qemu-nbd) got signal 0
>>>> [   38.881401] block nbd0: shutting down socket
>>>> [   38.881404] block nbd0: Receive control failed (result -4)
>>>> [   38.881417] block nbd0: queue cleared
>>>> [   87.463133] block nbd0: Attempted send on closed socket
>>>> [   87.463137] end_request: I/O error, dev nbd0, sector 66824
>>>> ====================================================
>>>>
>>>> My environment:
>>>>   Debian: 6.0.5
>>>>   Kernel: 3.3.1
>>>>   Qemu userspace: 1.2.0
>>>
>>>This could be a bug in the nbd client kernel module.
>>>drivers/block/nbd.c:sock_xmit() does the following:
>>>
>>>            result = kernel_recvmsg(sock, &msg, &iov, 1, size,
>>>                                    msg.msg_flags);
>>>
>>>    if (signal_pending(current)) {
>>>            siginfo_t info;
>>>            printk(KERN_WARNING "nbd (pid %d: %s) got signal %d\n",
>>>                    task_pid_nr(current), current->comm,
>>>                    dequeue_signal_lock(current, &current->blocked, &info));
>>>            result = -EINTR;
>>>            sock_shutdown(nbd, !send);
>>>            break;
>>>    }
>>>
>>>The signal number in the log output looks bogus, we shouldn't get 0.
>>>sock_xmit() actually blocks all signals except SIGKILL before calling
>>>kernel_recvmsg().  I guess this is an artifact of the suspend-to-disk
>>>operation, maybe the signal pending flag is set on the process.
>>>
>>>Perhaps someone with a better understanding of the kernel internals can
>>>check this?
>>>
>>>What happens next is that the nbd kernel module shuts down the NBD connection.
>>>
>>>As a workaround, please try running a separate nbd-client(1) process and drop
>>>the qemu-nbd -c command-line argument.  This way nbd-client(1) uses the
>>>nbd kernel module instead of the qemu-nbd process and you'll get the
>>>benefit of nbd-client's automatic reconnect.
>>>
>>>Stefan
>>>
>>
>> Hi Stefan,
>>
>> Thank you for the information.
>>
>> I did some experiments per your suggestion.  I wasn't sure if the following
>> was what you had in mind:
>>
>> 1) Configured 'nbd-server' and started (/etc/nbd-server/config):
>>   [generic]
>>   [export]
>>     exportname = /root/qemu/q1.img
>>     port = 2000
>
>You can use qemu-nbd instead of nbd-server.  This way you'll be able
>to serve up qcow2 and other image formats.
>
>Just avoid the qemu-nbd -c option.  This makes qemu-nbd purely run the
>NBD network protocol and skips simultaneously running the kernel NBD
>client.  (Since qemu-nbd doesn't reconnect when ioctl(NBD_DO_IT) fails
>with EINTR the workaround is to use nbd-client(1) to drive the kernel
>NBD client instead.)
>
>> 2) Started 'nbd-client':
>>    -> nbd-client localhost 2000 /dev/nbd0
>>
>> 3) Verify '/dev/nbd0' is in use (will appear in list):
>>    -> cat /proc/partitions
>>
>> At this point I could mount '/dev/nbd0' as expected, but not necessary
>> to demonstrate a problem.
>>
>> Now at this point if I enter S1(standby), S3(suspend to ram), or
>> S4(suspend to disk) I get the same dmesg as before indicating
>> 'nbd0' caught signal 0 and exited.
>>
>> When I resume I simply repeat step #3 to verify.
>
>It's expected that you get the same kernel messages.  The difference
>should be that /dev/nbd0 is still accessible after resuming from disk
>because nbd-client automatically reconnects after the nbd kernel
>module bails out with EINTR.
>
>> ==================
>>
>> Also, before contacting the group I had modified the same kernel source
>> that you had identified in 'drivers/block/nbd.c:sock_xmit()' so that it
>> takes no action.  This was strictly for troubleshooting:
>>
>> 199            result = kernel_recvmsg(sock, &msg, &iov, 1, size,
>> 200                                    msg.msg_flags);
>> 201
>> 202    if (signal_pending(current)) {
>> 203            siginfo_t info;
>> 204            printk(KERN_WARNING "nbd (pid %d: %s) got signal %d\n",
>> 205                    task_pid_nr(current), current->comm,
>> 206                    dequeue_signal_lock(current, &current->blocked, &info));
>> 207
>> 208            //result = -EINTR;
>> 209            //sock_shutdown(nbd, !send);
>> 210            //break;
>> 211    }
>>
>> We then got errors ("Wrong magic ...") in the following section:
>>
>> /* NULL returned = something went wrong, inform userspace */
>> static struct request *nbd_read_stat(struct nbd_device *lo)
>> {
>>         int result;
>>         struct nbd_reply reply;
>>         struct request *req;
>>
>>         reply.magic = 0;
>>         result = sock_xmit(lo, 0, &reply, sizeof(reply), MSG_WAITALL);
>>         if (result <= 0) {
>>                 dev_err(disk_to_dev(lo->disk),
>>                         "Receive control failed (result %d)\n", result);
>>                 goto harderror;
>>         }
>>
>>         if (ntohl(reply.magic) != NBD_REPLY_MAGIC) {
>>                 dev_err(disk_to_dev(lo->disk), "Wrong magic (0x%lx)\n",
>>                                 (unsigned long)ntohl(reply.magic));
>>                 result = -EPROTO;
>>                 goto harderror;
>>
>>
>> So, it seemed to me the call at line #199 above must be returning an
>> error after we commented out the signal-handling logic.
>
>I'm not familiar enough with the code to say what is happening.  As
>the next step I would print out the kernel_recvmsg() return value when
>the signal is pending and look into what happens during
>suspend-to-disk (there's some sort of process freezing that takes
>place).
>
>Sorry I can't be of more help.  Hopefully someone more familiar with
>the nbd kernel module will have time to chime in.
>
>Stefan
>

Stefan,

So, I tried the following:

  -> qemu-nbd -p 2000 /root/qemu/q1.img &
  -> nbd-client localhost 2000 /dev/nbd0 &

At this point I can mount /dev/nbd0, etc.

  -> echo platform > /sys/power/disk
  -> echo disk >/sys/power/state

At this point we are 'hibernated'.
On power cycle, the OS seems to come back to the state
before hibernation, with the exception of QEMU:

  nbd.c:nbd_receive_request():L517: read failed  <-- on command line

[78979.269039] Freezing user space processes ...
[78979.269122] nbd (pid 2455: nbd-client) got signal 0
[78979.269127] block nbd0: shutting down socket
[78979.269151] block nbd0: Receive control failed (result -4)
[78979.269165] block nbd0: queue cleared

=============================

Is this the correct test you were thinking of?

Thanks for your input!

Regards,
Mark T.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Qemu-devel] Hibernate and qemu-nbd
  2013-09-20 18:00 [Qemu-devel] Hibernate and qemu-nbd Mark Trumpold
@ 2013-09-21  9:59 ` Wouter Verhelst
  2013-09-25 14:42   ` Mark Trumpold
  0 siblings, 1 reply; 9+ messages in thread
From: Wouter Verhelst @ 2013-09-21  9:59 UTC (permalink / raw)
  To: Mark Trumpold
  Cc: nbd-general, Stefan Hajnoczi, bonzini, Paul Clements, qemu-devel

On 20-09-13 20:00, Mark Trumpold wrote:
> Stefan,
> 
> So, I tried the following:
> 
>   -> qemu-nbd -p 2000 /root/qemu/q1.img &
>   -> nbd-client localhost 2000 /dev/nbd0 &

That won't work. nbd-client will only try the reconnect thing if you use
the "-persist" option.

Also, nbd-client will do a fork(), so the & isn't necessary.

-- 
This end should point toward the ground if you want to go to space.

If it starts pointing toward space you are having a bad problem and you
will not go to space today.

  -- http://xkcd.com/1133/

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Qemu-devel] Hibernate and qemu-nbd
  2013-09-21  9:59 ` Wouter Verhelst
@ 2013-09-25 14:42   ` Mark Trumpold
  2013-09-26  7:11     ` Stefan Hajnoczi
  2013-09-26 19:46     ` [Qemu-devel] [Nbd] " Wouter Verhelst
  0 siblings, 2 replies; 9+ messages in thread
From: Mark Trumpold @ 2013-09-25 14:42 UTC (permalink / raw)
  To: Wouter Verhelst
  Cc: nbd-general, Stefan Hajnoczi, bonzini, Paul Clements,
	qemu-devel@nongnu.org

Hello Wouter,

Thank you for your input.

I replayed the test as follows:

  -> qemu-nbd -p 2000 -persist /root/qemu/q1.img &
  -> nbd-client localhost 2000 /dev/nbd0
  -> echo reboot >/sys/power/disk
  -> echo disk >/sys/power/state

The "reboot" is a handy way to test, as it goes through the
complete hibernate cycle and returns to the prompt.

In this case the client DID try to reconnect as you suggested;
however, the 'qemu-nbd' server side had exited, so no go.

Regards,
Mark T.


On 9/21/13 2:59 AM, "Wouter Verhelst" <w@uter.be> wrote:

>On 20-09-13 20:00, Mark Trumpold wrote:
>> Stefan,
>> 
>> So, I tried the following:
>> 
>>   -> qemu-nbd -p 2000 /root/qemu/q1.img &
>>   -> nbd-client localhost 2000 /dev/nbd0 &
>
>That won't work. nbd-client will only try the reconnect thing if you use
>the "-persist" option.
>
>Also, nbd-client will do a fork(), so the & isn't necessary.
>
>-- 
>This end should point toward the ground if you want to go to space.
>
>If it starts pointing toward space you are having a bad problem and you
>will not go to space today.
>
>  -- http://xkcd.com/1133/
>

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Qemu-devel] Hibernate and qemu-nbd
  2013-09-25 14:42   ` Mark Trumpold
@ 2013-09-26  7:11     ` Stefan Hajnoczi
  2013-09-26 19:46     ` [Qemu-devel] [Nbd] " Wouter Verhelst
  1 sibling, 0 replies; 9+ messages in thread
From: Stefan Hajnoczi @ 2013-09-26  7:11 UTC (permalink / raw)
  To: Mark Trumpold
  Cc: nbd-general, Wouter Verhelst, bonzini, Paul Clements,
	qemu-devel@nongnu.org

On Wed, Sep 25, 2013 at 07:42:40AM -0700, Mark Trumpold wrote:
> I replayed the test as follows:
> 
>   -> qemu-nbd -p 2000 -persist /root/qemu/q1.img &

Did you mean --persistent?

Any idea what terminated the qemu-nbd process?

Stefan

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Qemu-devel] [Nbd]  Hibernate and qemu-nbd
  2013-09-25 14:42   ` Mark Trumpold
  2013-09-26  7:11     ` Stefan Hajnoczi
@ 2013-09-26 19:46     ` Wouter Verhelst
  1 sibling, 0 replies; 9+ messages in thread
From: Wouter Verhelst @ 2013-09-26 19:46 UTC (permalink / raw)
  To: Mark Trumpold
  Cc: nbd-general, Stefan Hajnoczi, bonzini, Paul Clements,
	qemu-devel@nongnu.org

On 25-09-13 16:42, Mark Trumpold wrote:
> Hello Wouter,
> 
> Thank you for your input.
> 
> I replayed the test as follows:
> 
>   -> qemu-nbd -p 2000 -persist /root/qemu/q1.img &
>   -> nbd-client localhost 2000 /dev/nbd0

No.

nbd-client -persist localhost 2000 /dev/nbd0

-- 
This end should point toward the ground if you want to go to space.

If it starts pointing toward space you are having a bad problem and you
will not go to space today.

  -- http://xkcd.com/1133/

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2013-09-26 19:46 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-09-20 18:00 [Qemu-devel] Hibernate and qemu-nbd Mark Trumpold
2013-09-21  9:59 ` Wouter Verhelst
2013-09-25 14:42   ` Mark Trumpold
2013-09-26  7:11     ` Stefan Hajnoczi
2013-09-26 19:46     ` [Qemu-devel] [Nbd] " Wouter Verhelst
  -- strict thread matches above, loose matches on Subject: below --
2013-09-19 20:44 [Qemu-devel] " Mark Trumpold
2013-09-20  5:14 ` Stefan Hajnoczi
2013-09-17 14:10 Mark Trumpold
2013-09-18 13:12 ` Stefan Hajnoczi
