qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH 00/11] sheepdog: reconnect server after connection failure
@ 2013-07-23  8:30 MORITA Kazutaka
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 01/11] ignore SIGPIPE in qemu-img and qemu-io MORITA Kazutaka
                   ` (10 more replies)
  0 siblings, 11 replies; 20+ messages in thread
From: MORITA Kazutaka @ 2013-07-23  8:30 UTC (permalink / raw)
  To: Kevin Wolf, Stefan Hajnoczi, qemu-devel; +Cc: sheepdog

Currently, if a sheepdog server exits, all the connected VMs need to
be restarted.  This series implements a feature to reconnect to the
server, which enables online sheepdog upgrades and avoids restarting
VMs when sheepdog servers crash unexpectedly.

MORITA Kazutaka (11):
  ignore SIGPIPE in qemu-img and qemu-io
  iov: handle eof in iov_send_recv
  qemu-sockets: make wait_for_connect be invoked in qemu_aio_wait
  sheepdog: make connect nonblocking
  sheepdog: check return values of qemu_co_recv/send correctly
  sheepdog: handle vdi objects in resend_aio_req
  sheepdog: reload inode outside of resend_aioreq
  coroutine: add co_aio_sleep_ns() to allow sleep in block drivers
  sheepdog: try to reconnect to sheepdog after network error
  sheepdog: make add_aio_request and send_aioreq void functions
  sheepdog: cancel aio requests if possible

 Makefile                  |   4 +-
 block/sheepdog.c          | 314 ++++++++++++++++++++++++++++++++--------------
 include/block/coroutine.h |   8 ++
 qemu-coroutine-sleep.c    |  47 +++++++
 qemu-img.c                |   4 +
 qemu-io.c                 |   4 +
 util/iov.c                |   6 +
 util/qemu-sockets.c       |  15 ++-
 8 files changed, 303 insertions(+), 99 deletions(-)

-- 
1.8.1.3.566.gaa39828

* [Qemu-devel] [PATCH 01/11] ignore SIGPIPE in qemu-img and qemu-io
  2013-07-23  8:30 [Qemu-devel] [PATCH 00/11] sheepdog: reconnect server after connection failure MORITA Kazutaka
@ 2013-07-23  8:30 ` MORITA Kazutaka
  2013-07-23  9:19   ` [Qemu-devel] [PATCH for-1.6 " Paolo Bonzini
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 02/11] iov: handle EOF in iov_send_recv MORITA Kazutaka
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 20+ messages in thread
From: MORITA Kazutaka @ 2013-07-23  8:30 UTC (permalink / raw)
  To: Kevin Wolf, Stefan Hajnoczi, qemu-devel; +Cc: sheepdog

This prevents the tools from being killed by SIGPIPE when they write
data to a connection that has been closed on the other side.
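
For illustration only (not part of the patch), a minimal standalone
program showing the effect: once SIGPIPE is ignored, a write to a
connection that the peer has already closed fails with EPIPE instead
of killing the process.

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    char c = 'x';

    signal(SIGPIPE, SIG_IGN);          /* the call this patch adds */

    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
    close(sv[1]);                      /* the peer closes its side */

    if (write(sv[0], &c, 1) < 0) {
        /* without SIG_IGN, SIGPIPE would terminate the process here */
        printf("write failed: %s\n", strerror(errno));
    }
    return 0;
}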

Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
---
 qemu-img.c | 4 ++++
 qemu-io.c  | 4 ++++
 2 files changed, 8 insertions(+)

diff --git a/qemu-img.c b/qemu-img.c
index c55ca5c..919d464 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -2319,6 +2319,10 @@ int main(int argc, char **argv)
     const img_cmd_t *cmd;
     const char *cmdname;
 
+#ifdef CONFIG_POSIX
+    signal(SIGPIPE, SIG_IGN);
+#endif
+
     error_set_progname(argv[0]);
 
     qemu_init_main_loop();
diff --git a/qemu-io.c b/qemu-io.c
index cb9def5..d54dc86 100644
--- a/qemu-io.c
+++ b/qemu-io.c
@@ -335,6 +335,10 @@ int main(int argc, char **argv)
     int opt_index = 0;
     int flags = BDRV_O_UNMAP;
 
+#ifdef CONFIG_POSIX
+    signal(SIGPIPE, SIG_IGN);
+#endif
+
     progname = basename(argv[0]);
 
     while ((c = getopt_long(argc, argv, sopt, lopt, &opt_index)) != -1) {
-- 
1.8.1.3.566.gaa39828

* [Qemu-devel] [PATCH 02/11] iov: handle EOF in iov_send_recv
  2013-07-23  8:30 [Qemu-devel] [PATCH 00/11] sheepdog: reconnect server after connection failure MORITA Kazutaka
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 01/11] ignore SIGPIPE in qemu-img and qemu-io MORITA Kazutaka
@ 2013-07-23  8:30 ` MORITA Kazutaka
  2013-07-23 11:28   ` Paolo Bonzini
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 03/11] qemu-sockets: make wait_for_connect be invoked in qemu_aio_wait MORITA Kazutaka
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 20+ messages in thread
From: MORITA Kazutaka @ 2013-07-23  8:30 UTC (permalink / raw)
  To: Kevin Wolf, Stefan Hajnoczi, qemu-devel; +Cc: sheepdog

Without this patch, iov_send_recv() never returns when do_send_recv()
returns zero.
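
As a minimal sketch outside QEMU of the failure mode (the hypothetical
recv_exact() helper below is not QEMU code): a "receive exactly N
bytes" loop that only checks for ret < 0 makes no progress once the
peer performs an orderly shutdown, because recv() then returns 0
forever.  The ret == 0 check is the equivalent of what this patch adds
to iov_send_recv().

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

static ssize_t recv_exact(int fd, void *buf, size_t len)
{
    size_t total = 0;

    while (total < len) {
        ssize_t ret = recv(fd, (char *)buf + total, len - total, 0);
        if (ret < 0) {
            if (errno == EINTR) {
                continue;
            }
            return -1;
        }
        if (ret == 0) {
            break;    /* EOF: without this check the loop never ends */
        }
        total += ret;
    }
    return total;
}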

Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
---
 util/iov.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/util/iov.c b/util/iov.c
index cc6e837..f705586 100644
--- a/util/iov.c
+++ b/util/iov.c
@@ -202,6 +202,12 @@ ssize_t iov_send_recv(int sockfd, struct iovec *iov, unsigned iov_cnt,
             return -1;
         }
 
+        if (ret == 0 && !do_send) {
+            /* recv returns 0 when the peer has performed an orderly
+             * shutdown. */
+            break;
+        }
+
         /* Prepare for the next iteration */
         offset += ret;
         total += ret;
-- 
1.8.1.3.566.gaa39828

* [Qemu-devel] [PATCH 03/11] qemu-sockets: make wait_for_connect be invoked in qemu_aio_wait
  2013-07-23  8:30 [Qemu-devel] [PATCH 00/11] sheepdog: reconnect server after connection failure MORITA Kazutaka
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 01/11] ignore SIGPIPE in qemu-img and qemu-io MORITA Kazutaka
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 02/11] iov: handle EOF in iov_send_recv MORITA Kazutaka
@ 2013-07-23  8:30 ` MORITA Kazutaka
  2013-07-23 11:36   ` Paolo Bonzini
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 04/11] sheepdog: make connect nonblocking MORITA Kazutaka
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 20+ messages in thread
From: MORITA Kazutaka @ 2013-07-23  8:30 UTC (permalink / raw)
  To: Kevin Wolf, Stefan Hajnoczi, qemu-devel; +Cc: sheepdog

This allows us to use inet_nonblocking_connect() and
unix_nonblocking_connect() in block drivers.

qemu-ga needs to link block-obj to resolve dependencies of
qemu_aio_set_fd_handler().

Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
---
 Makefile            |  4 ++--
 util/qemu-sockets.c | 15 ++++++++++-----
 2 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/Makefile b/Makefile
index c06bfab..5fe2e0f 100644
--- a/Makefile
+++ b/Makefile
@@ -197,7 +197,7 @@ fsdev/virtfs-proxy-helper$(EXESUF): LIBS += -lcap
 qemu-img-cmds.h: $(SRC_PATH)/qemu-img-cmds.hx
 	$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@,"  GEN   $@")
 
-qemu-ga$(EXESUF): LIBS = $(LIBS_QGA)
+qemu-ga$(EXESUF): LIBS = $(LIBS_QGA) $(LIBS_TOOLS)
 qemu-ga$(EXESUF): QEMU_CFLAGS += -I qga/qapi-generated
 
 gen-out-type = $(subst .,-,$(suffix $@))
@@ -227,7 +227,7 @@ $(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
 QGALIB_GEN=$(addprefix qga/qapi-generated/, qga-qapi-types.h qga-qapi-visit.h qga-qmp-commands.h)
 $(qga-obj-y) qemu-ga.o: $(QGALIB_GEN)
 
-qemu-ga$(EXESUF): $(qga-obj-y) libqemuutil.a libqemustub.a
+qemu-ga$(EXESUF): $(qga-obj-y) $(block-obj-y) libqemuutil.a libqemustub.a
 	$(call LINK, $^)
 
 clean:
diff --git a/util/qemu-sockets.c b/util/qemu-sockets.c
index 095716e..8b21fd1 100644
--- a/util/qemu-sockets.c
+++ b/util/qemu-sockets.c
@@ -218,6 +218,11 @@ typedef struct ConnectState {
 static int inet_connect_addr(struct addrinfo *addr, bool *in_progress,
                              ConnectState *connect_state, Error **errp);
 
+static int return_true(void *opaque)
+{
+    return 1;
+}
+
 static void wait_for_connect(void *opaque)
 {
     ConnectState *s = opaque;
@@ -225,7 +230,7 @@ static void wait_for_connect(void *opaque)
     socklen_t valsize = sizeof(val);
     bool in_progress;
 
-    qemu_set_fd_handler2(s->fd, NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL, NULL);
 
     do {
         rc = qemu_getsockopt(s->fd, SOL_SOCKET, SO_ERROR, &val, &valsize);
@@ -288,8 +293,8 @@ static int inet_connect_addr(struct addrinfo *addr, bool *in_progress,
 
     if (connect_state != NULL && QEMU_SOCKET_RC_INPROGRESS(rc)) {
         connect_state->fd = sock;
-        qemu_set_fd_handler2(sock, NULL, NULL, wait_for_connect,
-                             connect_state);
+        qemu_aio_set_fd_handler(sock, NULL, wait_for_connect, return_true,
+                                connect_state);
         *in_progress = true;
     } else if (rc < 0) {
         error_set_errno(errp, errno, QERR_SOCKET_CONNECT_FAILED);
@@ -749,8 +754,8 @@ int unix_connect_opts(QemuOpts *opts, Error **errp,
 
     if (connect_state != NULL && QEMU_SOCKET_RC_INPROGRESS(rc)) {
         connect_state->fd = sock;
-        qemu_set_fd_handler2(sock, NULL, NULL, wait_for_connect,
-                             connect_state);
+        qemu_aio_set_fd_handler(sock, NULL, wait_for_connect, return_true,
+                                connect_state);
         return sock;
     } else if (rc >= 0) {
         /* non blocking socket immediate success, call callback */
-- 
1.8.1.3.566.gaa39828

* [Qemu-devel] [PATCH 04/11] sheepdog: make connect nonblocking
  2013-07-23  8:30 [Qemu-devel] [PATCH 00/11] sheepdog: reconnect server after connection failure MORITA Kazutaka
                   ` (2 preceding siblings ...)
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 03/11] qemu-sockets: make wait_for_connect be invoked in qemu_aio_wait MORITA Kazutaka
@ 2013-07-23  8:30 ` MORITA Kazutaka
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 05/11] sheepdog: check return values of qemu_co_recv/send correctly MORITA Kazutaka
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: MORITA Kazutaka @ 2013-07-23  8:30 UTC (permalink / raw)
  To: Kevin Wolf, Stefan Hajnoczi, qemu-devel; +Cc: sheepdog

This uses nonblocking connect functions to connect to the sheepdog
server.  The connect operation is done in a coroutine function, which
yields until the created socket is ready for I/O.

Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
---
 block/sheepdog.c | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 63 insertions(+), 7 deletions(-)

diff --git a/block/sheepdog.c b/block/sheepdog.c
index 6a41ad9..6f5ede4 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -455,18 +455,51 @@ static SheepdogAIOCB *sd_aio_setup(BlockDriverState *bs, QEMUIOVector *qiov,
     return acb;
 }
 
-static int connect_to_sdog(BDRVSheepdogState *s)
-{
+typedef struct SheepdogConnectCo {
+    BDRVSheepdogState *bs;
+    Coroutine *co;
     int fd;
+    bool finished;
+} SheepdogConnectCo;
+
+static void sd_connect_completed(int fd, void *opaque)
+{
+    SheepdogConnectCo *scco = opaque;
+
+    if (fd < 0) {
+        int val, rc;
+        socklen_t valsize = sizeof(val);
+
+        do {
+            rc = qemu_getsockopt(scco->fd, SOL_SOCKET, SO_ERROR, &val,
+                                 &valsize);
+        } while (rc == -1 && socket_error() == EINTR);
+
+        scco->fd = rc < 0 ? -errno : -val;
+    }
+
+    scco->finished = true;
+
+    if (scco->co != NULL) {
+        qemu_coroutine_enter(scco->co, NULL);
+    }
+}
+
+static coroutine_fn void co_connect_to_sdog(void *opaque)
+{
+    SheepdogConnectCo *scco = opaque;
+    BDRVSheepdogState *s = scco->bs;
     Error *err = NULL;
 
     if (s->is_unix) {
-        fd = unix_connect(s->host_spec, &err);
+        scco->fd = unix_nonblocking_connect(s->host_spec, sd_connect_completed,
+                                            opaque, &err);
     } else {
-        fd = inet_connect(s->host_spec, &err);
+        scco->fd = inet_nonblocking_connect(s->host_spec, sd_connect_completed,
+                                            opaque, &err);
 
         if (err == NULL) {
-            int ret = socket_set_nodelay(fd);
+            int ret = socket_set_nodelay(scco->fd);
             if (ret < 0) {
                 error_report("%s", strerror(errno));
             }
@@ -476,11 +509,34 @@ static int connect_to_sdog(BDRVSheepdogState *s)
     if (err != NULL) {
         qerror_report_err(err);
         error_free(err);
+    }
+
+    if (!scco->finished) {
+        /* wait for connect to finish */
+        scco->co = qemu_coroutine_self();
+        qemu_coroutine_yield();
+    }
+}
+
+static int connect_to_sdog(BDRVSheepdogState *s)
+{
+    Coroutine *co;
+    SheepdogConnectCo scco = {
+        .bs = s,
+        .finished = false,
+    };
+
+    if (qemu_in_coroutine()) {
+        co_connect_to_sdog(&scco);
     } else {
-        qemu_set_nonblock(fd);
+        co = qemu_coroutine_create(co_connect_to_sdog);
+        qemu_coroutine_enter(co, &scco);
+        while (!scco.finished) {
+            qemu_aio_wait();
+        }
     }
 
-    return fd;
+    return scco.fd;
 }
 
 static coroutine_fn int send_co_req(int sockfd, SheepdogReq *hdr, void *data,
-- 
1.8.1.3.566.gaa39828

* [Qemu-devel] [PATCH 05/11] sheepdog: check return values of qemu_co_recv/send correctly
  2013-07-23  8:30 [Qemu-devel] [PATCH 00/11] sheepdog: reconnect server after connection failure MORITA Kazutaka
                   ` (3 preceding siblings ...)
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 04/11] sheepdog: make connect nonblocking MORITA Kazutaka
@ 2013-07-23  8:30 ` MORITA Kazutaka
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 06/11] sheepdog: handle vdi objects in resend_aio_req MORITA Kazutaka
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: MORITA Kazutaka @ 2013-07-23  8:30 UTC (permalink / raw)
  To: Kevin Wolf, Stefan Hajnoczi, qemu-devel; +Cc: sheepdog

qemu_co_recv/send return a length shorter than requested on error.

Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
---
 block/sheepdog.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/block/sheepdog.c b/block/sheepdog.c
index 6f5ede4..567f52e 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -727,7 +727,7 @@ static void coroutine_fn aio_read_response(void *opaque)
 
     /* read a header */
     ret = qemu_co_recv(fd, &rsp, sizeof(rsp));
-    if (ret < 0) {
+    if (ret < sizeof(rsp)) {
         error_report("failed to get the header, %s", strerror(errno));
         goto out;
     }
@@ -778,7 +778,7 @@ static void coroutine_fn aio_read_response(void *opaque)
     case AIOCB_READ_UDATA:
         ret = qemu_co_recvv(fd, acb->qiov->iov, acb->qiov->niov,
                             aio_req->iov_offset, rsp.data_length);
-        if (ret < 0) {
+        if (ret < rsp.data_length) {
             error_report("failed to get the data, %s", strerror(errno));
             goto out;
         }
@@ -1131,7 +1131,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
 
     /* send a header */
     ret = qemu_co_send(s->fd, &hdr, sizeof(hdr));
-    if (ret < 0) {
+    if (ret < sizeof(hdr)) {
         qemu_co_mutex_unlock(&s->lock);
         error_report("failed to send a req, %s", strerror(errno));
         return -errno;
@@ -1139,7 +1139,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
 
     if (wlen) {
         ret = qemu_co_sendv(s->fd, iov, niov, aio_req->iov_offset, wlen);
-        if (ret < 0) {
+        if (ret < wlen) {
             qemu_co_mutex_unlock(&s->lock);
             error_report("failed to send a data, %s", strerror(errno));
             return -errno;
-- 
1.8.1.3.566.gaa39828

* [Qemu-devel] [PATCH 06/11] sheepdog: handle vdi objects in resend_aio_req
  2013-07-23  8:30 [Qemu-devel] [PATCH 00/11] sheepdog: reconnect server after connection failure MORITA Kazutaka
                   ` (4 preceding siblings ...)
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 05/11] sheepdog: check return values of qemu_co_recv/send correctly MORITA Kazutaka
@ 2013-07-23  8:30 ` MORITA Kazutaka
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 07/11] sheepdog: reload inode outside of resend_aioreq MORITA Kazutaka
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: MORITA Kazutaka @ 2013-07-23  8:30 UTC (permalink / raw)
  To: Kevin Wolf, Stefan Hajnoczi, qemu-devel; +Cc: sheepdog

The current resend_aio_req() doesn't work when the request is against
a vdi object.  This patch fixes the problem.

Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
---
 block/sheepdog.c | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/block/sheepdog.c b/block/sheepdog.c
index 567f52e..018eab2 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -1265,11 +1265,15 @@ static int coroutine_fn resend_aioreq(BDRVSheepdogState *s, AIOReq *aio_req)
         return ret;
     }
 
-    aio_req->oid = vid_to_data_oid(s->inode.vdi_id,
-                                   data_oid_to_idx(aio_req->oid));
+    if (is_data_obj(aio_req->oid)) {
+        aio_req->oid = vid_to_data_oid(s->inode.vdi_id,
+                                       data_oid_to_idx(aio_req->oid));
+    } else {
+        aio_req->oid = vid_to_vdi_oid(s->inode.vdi_id);
+    }
 
     /* check whether this request becomes a CoW one */
-    if (acb->aiocb_type == AIOCB_WRITE_UDATA) {
+    if (acb->aiocb_type == AIOCB_WRITE_UDATA && is_data_obj(aio_req->oid)) {
         int idx = data_oid_to_idx(aio_req->oid);
         AIOReq *areq;
 
@@ -1297,8 +1301,15 @@ static int coroutine_fn resend_aioreq(BDRVSheepdogState *s, AIOReq *aio_req)
         create = true;
     }
 out:
-    return add_aio_request(s, aio_req, acb->qiov->iov, acb->qiov->niov,
-                           create, acb->aiocb_type);
+    if (is_data_obj(aio_req->oid)) {
+        return add_aio_request(s, aio_req, acb->qiov->iov, acb->qiov->niov,
+                               create, acb->aiocb_type);
+    } else {
+        struct iovec iov;
+        iov.iov_base = &s->inode;
+        iov.iov_len = sizeof(s->inode);
+        return add_aio_request(s, aio_req, &iov, 1, false, AIOCB_WRITE_UDATA);
+    }
 }
 
 /* TODO Convert to fine grained options */
-- 
1.8.1.3.566.gaa39828

* [Qemu-devel] [PATCH 07/11] sheepdog: reload inode outside of resend_aioreq
  2013-07-23  8:30 [Qemu-devel] [PATCH 00/11] sheepdog: reconnect server after connection failure MORITA Kazutaka
                   ` (5 preceding siblings ...)
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 06/11] sheepdog: handle vdi objects in resend_aio_req MORITA Kazutaka
@ 2013-07-23  8:30 ` MORITA Kazutaka
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 08/11] coroutine: add co_aio_sleep_ns() to allow sleep in block drivers MORITA Kazutaka
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: MORITA Kazutaka @ 2013-07-23  8:30 UTC (permalink / raw)
  To: Kevin Wolf, Stefan Hajnoczi, qemu-devel; +Cc: sheepdog

This prepares for using resend_aioreq() after reconnecting to the
sheepdog server.

Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
---
 block/sheepdog.c | 33 +++++++++++++++++++--------------
 1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/block/sheepdog.c b/block/sheepdog.c
index 018eab2..1173605 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -222,6 +222,11 @@ static inline uint64_t data_oid_to_idx(uint64_t oid)
     return oid & (MAX_DATA_OBJS - 1);
 }
 
+static inline uint32_t oid_to_vid(uint64_t oid)
+{
+    return (oid & ~VDI_BIT) >> VDI_SPACE_SHIFT;
+}
+
 static inline uint64_t vid_to_vdi_oid(uint32_t vid)
 {
     return VDI_BIT | ((uint64_t)vid << VDI_SPACE_SHIFT);
@@ -663,7 +668,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
                            struct iovec *iov, int niov, bool create,
                            enum AIOCBState aiocb_type);
 static int coroutine_fn resend_aioreq(BDRVSheepdogState *s, AIOReq *aio_req);
-
+static int reload_inode(BDRVSheepdogState *s, uint32_t snapid, const char *tag);
 
 static AIOReq *find_pending_req(BDRVSheepdogState *s, uint64_t oid)
 {
@@ -811,6 +816,19 @@ static void coroutine_fn aio_read_response(void *opaque)
     case SD_RES_SUCCESS:
         break;
     case SD_RES_READONLY:
+        if (s->inode.vdi_id == oid_to_vid(aio_req->oid)) {
+            ret = reload_inode(s, 0, "");
+            if (ret < 0) {
+                goto out;
+            }
+        }
+
+        if (is_data_obj(aio_req->oid)) {
+            aio_req->oid = vid_to_data_oid(s->inode.vdi_id,
+                                           data_oid_to_idx(aio_req->oid));
+        } else {
+            aio_req->oid = vid_to_vdi_oid(s->inode.vdi_id);
+        }
         ret = resend_aioreq(s, aio_req);
         if (ret == SD_RES_SUCCESS) {
             goto out;
@@ -1258,19 +1276,6 @@ static int coroutine_fn resend_aioreq(BDRVSheepdogState *s, AIOReq *aio_req)
 {
     SheepdogAIOCB *acb = aio_req->aiocb;
     bool create = false;
-    int ret;
-
-    ret = reload_inode(s, 0, "");
-    if (ret < 0) {
-        return ret;
-    }
-
-    if (is_data_obj(aio_req->oid)) {
-        aio_req->oid = vid_to_data_oid(s->inode.vdi_id,
-                                       data_oid_to_idx(aio_req->oid));
-    } else {
-        aio_req->oid = vid_to_vdi_oid(s->inode.vdi_id);
-    }
 
     /* check whether this request becomes a CoW one */
     if (acb->aiocb_type == AIOCB_WRITE_UDATA && is_data_obj(aio_req->oid)) {
-- 
1.8.1.3.566.gaa39828

* [Qemu-devel] [PATCH 08/11] coroutine: add co_aio_sleep_ns() to allow sleep in block drivers
  2013-07-23  8:30 [Qemu-devel] [PATCH 00/11] sheepdog: reconnect server after connection failure MORITA Kazutaka
                   ` (6 preceding siblings ...)
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 07/11] sheepdog: reload inode outside of resend_aioreq MORITA Kazutaka
@ 2013-07-23  8:30 ` MORITA Kazutaka
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 09/11] sheepdog: try to reconnect to sheepdog after network error MORITA Kazutaka
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 20+ messages in thread
From: MORITA Kazutaka @ 2013-07-23  8:30 UTC (permalink / raw)
  To: Kevin Wolf, Stefan Hajnoczi, qemu-devel; +Cc: sheepdog

This helper function behaves similarly to co_sleep_ns(), but the
sleeping coroutine will be resumed when using qemu_aio_wait().
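
A usage sketch based on patch 09 of this series, where the caller is
reconnect_to_sdog(); wait_for_sheep_fd() is a hypothetical wrapper
name used only for illustration, while BDRVSheepdogState and
get_sheep_fd() are the existing sheepdog driver names:

/* Retry once per second from coroutine context; qemu_aio_wait() keeps
 * running while the coroutine sleeps. */
static coroutine_fn void wait_for_sheep_fd(BDRVSheepdogState *s)
{
    while (s->fd < 0) {
        s->fd = get_sheep_fd(s);
        if (s->fd < 0) {
            co_aio_sleep_ns(1000000000ULL);    /* 1 second */
        }
    }
}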

Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
---
 include/block/coroutine.h |  8 ++++++++
 qemu-coroutine-sleep.c    | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+)

diff --git a/include/block/coroutine.h b/include/block/coroutine.h
index 377805a..23ea6e9 100644
--- a/include/block/coroutine.h
+++ b/include/block/coroutine.h
@@ -210,6 +210,14 @@ void qemu_co_rwlock_unlock(CoRwlock *lock);
 void coroutine_fn co_sleep_ns(QEMUClock *clock, int64_t ns);
 
 /**
+ * Yield the coroutine for a given duration
+ *
+ * Behaves similarly to co_sleep_ns(), but the sleeping coroutine will be
+ * resumed when using qemu_aio_wait().
+ */
+void coroutine_fn co_aio_sleep_ns(int64_t ns);
+
+/**
  * Yield until a file descriptor becomes readable
  *
  * Note that this function clobbers the handlers for the file descriptor.
diff --git a/qemu-coroutine-sleep.c b/qemu-coroutine-sleep.c
index 169ce5c..3955347 100644
--- a/qemu-coroutine-sleep.c
+++ b/qemu-coroutine-sleep.c
@@ -13,6 +13,7 @@
 
 #include "block/coroutine.h"
 #include "qemu/timer.h"
+#include "qemu/thread.h"
 
 typedef struct CoSleepCB {
     QEMUTimer *ts;
@@ -37,3 +38,49 @@ void coroutine_fn co_sleep_ns(QEMUClock *clock, int64_t ns)
     qemu_del_timer(sleep_cb.ts);
     qemu_free_timer(sleep_cb.ts);
 }
+
+typedef struct CoAioSleepCB {
+    QEMUBH *bh;
+    int64_t ns;
+    Coroutine *co;
+} CoAioSleepCB;
+
+static void co_aio_sleep_cb(void *opaque)
+{
+    CoAioSleepCB *aio_sleep_cb = opaque;
+
+    qemu_coroutine_enter(aio_sleep_cb->co, NULL);
+}
+
+static void *sleep_thread(void *opaque)
+{
+    CoAioSleepCB *aio_sleep_cb = opaque;
+    struct timespec req = {
+        .tv_sec = aio_sleep_cb->ns / 1000000000,
+        .tv_nsec = aio_sleep_cb->ns % 1000000000,
+    };
+    struct timespec rem;
+
+    while (nanosleep(&req, &rem) < 0 && errno == EINTR) {
+        req = rem;
+    }
+
+    qemu_bh_schedule(aio_sleep_cb->bh);
+
+    return NULL;
+}
+
+void coroutine_fn co_aio_sleep_ns(int64_t ns)
+{
+    CoAioSleepCB aio_sleep_cb = {
+        .ns = ns,
+        .co = qemu_coroutine_self(),
+    };
+    QemuThread thread;
+
+    aio_sleep_cb.bh = qemu_bh_new(co_aio_sleep_cb, &aio_sleep_cb);
+    qemu_thread_create(&thread, sleep_thread, &aio_sleep_cb,
+                       QEMU_THREAD_DETACHED);
+    qemu_coroutine_yield();
+    qemu_bh_delete(aio_sleep_cb.bh);
+}
-- 
1.8.1.3.566.gaa39828

* [Qemu-devel] [PATCH 09/11] sheepdog: try to reconnect to sheepdog after network error
  2013-07-23  8:30 [Qemu-devel] [PATCH 00/11] sheepdog: reconnect server after connection failure MORITA Kazutaka
                   ` (7 preceding siblings ...)
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 08/11] coroutine: add co_aio_sleep_ns() to allow sleep in block drivers MORITA Kazutaka
@ 2013-07-23  8:30 ` MORITA Kazutaka
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 10/11] sheepdog: make add_aio_request and send_aioreq void functions MORITA Kazutaka
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 11/11] sheepdog: cancel aio requests if possible MORITA Kazutaka
  10 siblings, 0 replies; 20+ messages in thread
From: MORITA Kazutaka @ 2013-07-23  8:30 UTC (permalink / raw)
  To: Kevin Wolf, Stefan Hajnoczi, qemu-devel; +Cc: sheepdog

This introduces a failed request queue and moves all the inflight
requests to that queue when a network error happens.  After QEMU
reconnects to the sheepdog server successfully, the sheepdog block
driver retries all the requests in the failed queue.

Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
---
 block/sheepdog.c | 72 ++++++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 57 insertions(+), 15 deletions(-)

diff --git a/block/sheepdog.c b/block/sheepdog.c
index 1173605..cd72927 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -318,8 +318,11 @@ typedef struct BDRVSheepdogState {
     Coroutine *co_recv;
 
     uint32_t aioreq_seq_num;
+
+    /* Every aio request must be linked to either of these queues. */
     QLIST_HEAD(inflight_aio_head, AIOReq) inflight_aio_head;
     QLIST_HEAD(pending_aio_head, AIOReq) pending_aio_head;
+    QLIST_HEAD(failed_aio_head, AIOReq) failed_aio_head;
 } BDRVSheepdogState;
 
 static const char * sd_strerror(int err)
@@ -669,6 +672,8 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
                            enum AIOCBState aiocb_type);
 static int coroutine_fn resend_aioreq(BDRVSheepdogState *s, AIOReq *aio_req);
 static int reload_inode(BDRVSheepdogState *s, uint32_t snapid, const char *tag);
+static int get_sheep_fd(BDRVSheepdogState *s);
+static void co_write_request(void *opaque);
 
 static AIOReq *find_pending_req(BDRVSheepdogState *s, uint64_t oid)
 {
@@ -710,6 +715,44 @@ static void coroutine_fn send_pending_req(BDRVSheepdogState *s, uint64_t oid)
     }
 }
 
+static coroutine_fn void reconnect_to_sdog(void *opaque)
+{
+    BDRVSheepdogState *s = opaque;
+    AIOReq *aio_req, *next;
+
+    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL, NULL);
+    close(s->fd);
+    s->fd = -1;
+
+    /* Wait for outstanding write requests to be completed. */
+    while (s->co_send != NULL) {
+        co_write_request(opaque);
+    }
+
+    /* Move all the inflight requests to the failed queue. */
+    QLIST_FOREACH_SAFE(aio_req, &s->inflight_aio_head, aio_siblings, next) {
+        QLIST_REMOVE(aio_req, aio_siblings);
+        QLIST_INSERT_HEAD(&s->failed_aio_head, aio_req, aio_siblings);
+    }
+
+    /* Try to reconnect to the sheepdog server once per second. */
+    while (s->fd < 0) {
+        s->fd = get_sheep_fd(s);
+        if (s->fd < 0) {
+            dprintf("Wait for connection to be established\n");
+            co_aio_sleep_ns(1000000000ULL);
+        }
+    }
+
+    /* Resend all the failed aio requests. */
+    while (!QLIST_EMPTY(&s->failed_aio_head)) {
+        aio_req = QLIST_FIRST(&s->failed_aio_head);
+        QLIST_REMOVE(aio_req, aio_siblings);
+        QLIST_INSERT_HEAD(&s->inflight_aio_head, aio_req, aio_siblings);
+        resend_aioreq(s, aio_req);
+    }
+}
+
 /*
  * Receive responses of the I/O requests.
  *
@@ -726,15 +769,11 @@ static void coroutine_fn aio_read_response(void *opaque)
     SheepdogAIOCB *acb;
     uint64_t idx;
 
-    if (QLIST_EMPTY(&s->inflight_aio_head)) {
-        goto out;
-    }
-
     /* read a header */
     ret = qemu_co_recv(fd, &rsp, sizeof(rsp));
     if (ret < sizeof(rsp)) {
         error_report("failed to get the header, %s", strerror(errno));
-        goto out;
+        goto err;
     }
 
     /* find the right aio_req from the inflight aio list */
@@ -745,7 +784,7 @@ static void coroutine_fn aio_read_response(void *opaque)
     }
     if (!aio_req) {
         error_report("cannot find aio_req %x", rsp.id);
-        goto out;
+        goto err;
     }
 
     acb = aio_req->aiocb;
@@ -785,7 +824,7 @@ static void coroutine_fn aio_read_response(void *opaque)
                             aio_req->iov_offset, rsp.data_length);
         if (ret < rsp.data_length) {
             error_report("failed to get the data, %s", strerror(errno));
-            goto out;
+            goto err;
         }
         break;
     case AIOCB_FLUSH_CACHE:
@@ -819,10 +858,9 @@ static void coroutine_fn aio_read_response(void *opaque)
         if (s->inode.vdi_id == oid_to_vid(aio_req->oid)) {
             ret = reload_inode(s, 0, "");
             if (ret < 0) {
-                goto out;
+                goto err;
             }
         }
-
         if (is_data_obj(aio_req->oid)) {
             aio_req->oid = vid_to_data_oid(s->inode.vdi_id,
                                            data_oid_to_idx(aio_req->oid));
@@ -850,6 +888,10 @@ static void coroutine_fn aio_read_response(void *opaque)
     }
 out:
     s->co_recv = NULL;
+    return;
+err:
+    s->co_recv = NULL;
+    reconnect_to_sdog(opaque);
 }
 
 static void co_read_response(void *opaque)
@@ -875,7 +917,8 @@ static int aio_flush_request(void *opaque)
     BDRVSheepdogState *s = opaque;
 
     return !QLIST_EMPTY(&s->inflight_aio_head) ||
-        !QLIST_EMPTY(&s->pending_aio_head);
+        !QLIST_EMPTY(&s->pending_aio_head) ||
+        !QLIST_EMPTY(&s->failed_aio_head);
 }
 
 /*
@@ -1150,23 +1193,21 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
     /* send a header */
     ret = qemu_co_send(s->fd, &hdr, sizeof(hdr));
     if (ret < sizeof(hdr)) {
-        qemu_co_mutex_unlock(&s->lock);
         error_report("failed to send a req, %s", strerror(errno));
-        return -errno;
+        goto out;
     }
 
     if (wlen) {
         ret = qemu_co_sendv(s->fd, iov, niov, aio_req->iov_offset, wlen);
         if (ret < wlen) {
-            qemu_co_mutex_unlock(&s->lock);
             error_report("failed to send a data, %s", strerror(errno));
-            return -errno;
         }
     }
-
+out:
     socket_set_cork(s->fd, 0);
     qemu_aio_set_fd_handler(s->fd, co_read_response, NULL,
                             aio_flush_request, s);
+    s->co_send = NULL;
     qemu_co_mutex_unlock(&s->lock);
 
     return 0;
@@ -1356,6 +1397,7 @@ static int sd_open(BlockDriverState *bs, QDict *options, int flags)
 
     QLIST_INIT(&s->inflight_aio_head);
     QLIST_INIT(&s->pending_aio_head);
+    QLIST_INIT(&s->failed_aio_head);
     s->fd = -1;
 
     memset(vdi, 0, sizeof(vdi));
-- 
1.8.1.3.566.gaa39828

* [Qemu-devel] [PATCH 10/11] sheepdog: make add_aio_request and send_aioreq void functions
  2013-07-23  8:30 [Qemu-devel] [PATCH 00/11] sheepdog: reconnect server after connection failure MORITA Kazutaka
                   ` (8 preceding siblings ...)
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 09/11] sheepdog: try to reconnect to sheepdog after network error MORITA Kazutaka
@ 2013-07-23  8:30 ` MORITA Kazutaka
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 11/11] sheepdog: cancel aio requests if possible MORITA Kazutaka
  10 siblings, 0 replies; 20+ messages in thread
From: MORITA Kazutaka @ 2013-07-23  8:30 UTC (permalink / raw)
  To: Kevin Wolf, Stefan Hajnoczi, qemu-devel; +Cc: sheepdog

These functions no longer return errors.  We can make them void
functions and simplify the code.

Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
---
 block/sheepdog.c | 66 +++++++++++++++-----------------------------------------
 1 file changed, 17 insertions(+), 49 deletions(-)

diff --git a/block/sheepdog.c b/block/sheepdog.c
index cd72927..8a6c432 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -667,10 +667,10 @@ static int do_req(int sockfd, SheepdogReq *hdr, void *data,
     return srco.ret;
 }
 
-static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
+static void coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
                            struct iovec *iov, int niov, bool create,
                            enum AIOCBState aiocb_type);
-static int coroutine_fn resend_aioreq(BDRVSheepdogState *s, AIOReq *aio_req);
+static void coroutine_fn resend_aioreq(BDRVSheepdogState *s, AIOReq *aio_req);
 static int reload_inode(BDRVSheepdogState *s, uint32_t snapid, const char *tag);
 static int get_sheep_fd(BDRVSheepdogState *s);
 static void co_write_request(void *opaque);
@@ -696,22 +696,14 @@ static void coroutine_fn send_pending_req(BDRVSheepdogState *s, uint64_t oid)
 {
     AIOReq *aio_req;
     SheepdogAIOCB *acb;
-    int ret;
 
     while ((aio_req = find_pending_req(s, oid)) != NULL) {
         acb = aio_req->aiocb;
         /* move aio_req from pending list to inflight one */
         QLIST_REMOVE(aio_req, aio_siblings);
         QLIST_INSERT_HEAD(&s->inflight_aio_head, aio_req, aio_siblings);
-        ret = add_aio_request(s, aio_req, acb->qiov->iov,
-                              acb->qiov->niov, false, acb->aiocb_type);
-        if (ret < 0) {
-            error_report("add_aio_request is failed");
-            free_aio_req(s, aio_req);
-            if (!acb->nr_pending) {
-                sd_finish_aiocb(acb);
-            }
-        }
+        add_aio_request(s, aio_req, acb->qiov->iov, acb->qiov->niov, false,
+                        acb->aiocb_type);
     }
 }
 
@@ -867,11 +859,8 @@ static void coroutine_fn aio_read_response(void *opaque)
         } else {
             aio_req->oid = vid_to_vdi_oid(s->inode.vdi_id);
         }
-        ret = resend_aioreq(s, aio_req);
-        if (ret == SD_RES_SUCCESS) {
-            goto out;
-        }
-        /* fall through */
+        resend_aioreq(s, aio_req);
+        goto out;
     default:
         acb->ret = -EIO;
         error_report("%s", sd_strerror(rsp.result));
@@ -1129,7 +1118,7 @@ out:
     return ret;
 }
 
-static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
+static void coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
                            struct iovec *iov, int niov, bool create,
                            enum AIOCBState aiocb_type)
 {
@@ -1209,8 +1198,6 @@ out:
                             aio_flush_request, s);
     s->co_send = NULL;
     qemu_co_mutex_unlock(&s->lock);
-
-    return 0;
 }
 
 static int read_write_object(int fd, char *buf, uint64_t oid, int copies,
@@ -1313,7 +1300,7 @@ out:
     return ret;
 }
 
-static int coroutine_fn resend_aioreq(BDRVSheepdogState *s, AIOReq *aio_req)
+static void coroutine_fn resend_aioreq(BDRVSheepdogState *s, AIOReq *aio_req)
 {
     SheepdogAIOCB *acb = aio_req->aiocb;
     bool create = false;
@@ -1338,7 +1325,7 @@ static int coroutine_fn resend_aioreq(BDRVSheepdogState *s, AIOReq *aio_req)
                 dprintf("simultaneous CoW to %" PRIx64 "\n", aio_req->oid);
                 QLIST_REMOVE(aio_req, aio_siblings);
                 QLIST_INSERT_HEAD(&s->pending_aio_head, aio_req, aio_siblings);
-                return SD_RES_SUCCESS;
+                return;
             }
         }
 
@@ -1348,13 +1335,13 @@ static int coroutine_fn resend_aioreq(BDRVSheepdogState *s, AIOReq *aio_req)
     }
 out:
     if (is_data_obj(aio_req->oid)) {
-        return add_aio_request(s, aio_req, acb->qiov->iov, acb->qiov->niov,
-                               create, acb->aiocb_type);
+        add_aio_request(s, aio_req, acb->qiov->iov, acb->qiov->niov, create,
+                        acb->aiocb_type);
     } else {
         struct iovec iov;
         iov.iov_base = &s->inode;
         iov.iov_len = sizeof(s->inode);
-        return add_aio_request(s, aio_req, &iov, 1, false, AIOCB_WRITE_UDATA);
+        add_aio_request(s, aio_req, &iov, 1, false, AIOCB_WRITE_UDATA);
     }
 }
 
@@ -1744,7 +1731,6 @@ static int sd_truncate(BlockDriverState *bs, int64_t offset)
  */
 static void coroutine_fn sd_write_done(SheepdogAIOCB *acb)
 {
-    int ret;
     BDRVSheepdogState *s = acb->common.bs->opaque;
     struct iovec iov;
     AIOReq *aio_req;
@@ -1766,18 +1752,13 @@ static void coroutine_fn sd_write_done(SheepdogAIOCB *acb)
         aio_req = alloc_aio_req(s, acb, vid_to_vdi_oid(s->inode.vdi_id),
                                 data_len, offset, 0, 0, offset);
         QLIST_INSERT_HEAD(&s->inflight_aio_head, aio_req, aio_siblings);
-        ret = add_aio_request(s, aio_req, &iov, 1, false, AIOCB_WRITE_UDATA);
-        if (ret) {
-            free_aio_req(s, aio_req);
-            acb->ret = -EIO;
-            goto out;
-        }
+        add_aio_request(s, aio_req, &iov, 1, false, AIOCB_WRITE_UDATA);
 
         acb->aio_done_func = sd_finish_aiocb;
         acb->aiocb_type = AIOCB_WRITE_UDATA;
         return;
     }
-out:
+
     sd_finish_aiocb(acb);
 }
 
@@ -1984,14 +1965,8 @@ static int coroutine_fn sd_co_rw_vector(void *p)
         }
 
         QLIST_INSERT_HEAD(&s->inflight_aio_head, aio_req, aio_siblings);
-        ret = add_aio_request(s, aio_req, acb->qiov->iov, acb->qiov->niov,
-                              create, acb->aiocb_type);
-        if (ret < 0) {
-            error_report("add_aio_request is failed");
-            free_aio_req(s, aio_req);
-            acb->ret = -EIO;
-            goto out;
-        }
+        add_aio_request(s, aio_req, acb->qiov->iov, acb->qiov->niov, create,
+                        acb->aiocb_type);
     done:
         offset = 0;
         idx++;
@@ -2059,7 +2034,6 @@ static int coroutine_fn sd_co_flush_to_disk(BlockDriverState *bs)
     BDRVSheepdogState *s = bs->opaque;
     SheepdogAIOCB *acb;
     AIOReq *aio_req;
-    int ret;
 
     if (s->cache_flags != SD_FLAG_CMD_CACHE) {
         return 0;
@@ -2072,13 +2046,7 @@ static int coroutine_fn sd_co_flush_to_disk(BlockDriverState *bs)
     aio_req = alloc_aio_req(s, acb, vid_to_vdi_oid(s->inode.vdi_id),
                             0, 0, 0, 0, 0);
     QLIST_INSERT_HEAD(&s->inflight_aio_head, aio_req, aio_siblings);
-    ret = add_aio_request(s, aio_req, NULL, 0, false, acb->aiocb_type);
-    if (ret < 0) {
-        error_report("add_aio_request is failed");
-        free_aio_req(s, aio_req);
-        qemu_aio_release(acb);
-        return ret;
-    }
+    add_aio_request(s, aio_req, NULL, 0, false, acb->aiocb_type);
 
     qemu_coroutine_yield();
     return acb->ret;
-- 
1.8.1.3.566.gaa39828

* [Qemu-devel] [PATCH 11/11] sheepdog: cancel aio requests if possible
  2013-07-23  8:30 [Qemu-devel] [PATCH 00/11] sheepdog: reconnect server after connection failure MORITA Kazutaka
                   ` (9 preceding siblings ...)
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 10/11] sheepdog: make add_aio_request and send_aioreq void functions MORITA Kazutaka
@ 2013-07-23  8:30 ` MORITA Kazutaka
  10 siblings, 0 replies; 20+ messages in thread
From: MORITA Kazutaka @ 2013-07-23  8:30 UTC (permalink / raw)
  To: Kevin Wolf, Stefan Hajnoczi, qemu-devel; +Cc: sheepdog

This patch tries to cancel aio requests in the pending and failed
queues.  When the sheepdog driver cannot cancel the requests, it waits
for them to complete.

Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
---
 block/sheepdog.c | 70 +++++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 59 insertions(+), 11 deletions(-)

diff --git a/block/sheepdog.c b/block/sheepdog.c
index 8a6c432..43be479 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -294,7 +294,8 @@ struct SheepdogAIOCB {
     Coroutine *coroutine;
     void (*aio_done_func)(SheepdogAIOCB *);
 
-    bool canceled;
+    bool cancelable;
+    bool *finished;
     int nr_pending;
 };
 
@@ -411,6 +412,7 @@ static inline void free_aio_req(BDRVSheepdogState *s, AIOReq *aio_req)
 {
     SheepdogAIOCB *acb = aio_req->aiocb;
 
+    acb->cancelable = false;
     QLIST_REMOVE(aio_req, aio_siblings);
     g_free(aio_req);
 
@@ -419,23 +421,68 @@ static inline void free_aio_req(BDRVSheepdogState *s, AIOReq *aio_req)
 
 static void coroutine_fn sd_finish_aiocb(SheepdogAIOCB *acb)
 {
-    if (!acb->canceled) {
-        qemu_coroutine_enter(acb->coroutine, NULL);
+    qemu_coroutine_enter(acb->coroutine, NULL);
+    if (acb->finished) {
+        *acb->finished = true;
     }
     qemu_aio_release(acb);
 }
 
+/*
+ * Check whether the specified acb can be canceled
+ *
+ * We can cancel aio only when none of the requests belonging to the
+ * acb have been sent to the sheepdog server, i.e. none of them are
+ * linked to the inflight queue.
+ */
+static bool sd_acb_cancelable(const SheepdogAIOCB *acb)
+{
+    BDRVSheepdogState *s = acb->common.bs->opaque;
+    AIOReq *aioreq;
+
+    if (!acb->cancelable) {
+        return false;
+    }
+
+    QLIST_FOREACH(aioreq, &s->inflight_aio_head, aio_siblings) {
+        if (aioreq->aiocb == acb) {
+            return false;
+        }
+    }
+
+    return true;
+}
+
 static void sd_aio_cancel(BlockDriverAIOCB *blockacb)
 {
     SheepdogAIOCB *acb = (SheepdogAIOCB *)blockacb;
+    BDRVSheepdogState *s = acb->common.bs->opaque;
+    AIOReq *aioreq, *next;
+    bool finished = false;
+
+    acb->finished = &finished;
+    while (!finished) {
+        if (sd_acb_cancelable(acb)) {
+            /* Remove outstanding requests from pending and failed queues.  */
+            QLIST_FOREACH_SAFE(aioreq, &s->pending_aio_head, aio_siblings,
+                               next) {
+                if (aioreq->aiocb == acb) {
+                    free_aio_req(s, aioreq);
+                }
+            }
+            QLIST_FOREACH_SAFE(aioreq, &s->failed_aio_head, aio_siblings,
+                               next) {
+                if (aioreq->aiocb == acb) {
+                    free_aio_req(s, aioreq);
+                }
+            }
 
-    /*
-     * Sheepdog cannot cancel the requests which are already sent to
-     * the servers, so we just complete the request with -EIO here.
-     */
-    acb->ret = -EIO;
-    qemu_coroutine_enter(acb->coroutine, NULL);
-    acb->canceled = true;
+            assert(acb->nr_pending == 0);
+            sd_finish_aiocb(acb);
+            return;
+        }
+        qemu_aio_wait();
+    }
 }
 
 static const AIOCBInfo sd_aiocb_info = {
@@ -456,7 +503,8 @@ static SheepdogAIOCB *sd_aio_setup(BlockDriverState *bs, QEMUIOVector *qiov,
     acb->nb_sectors = nb_sectors;
 
     acb->aio_done_func = NULL;
-    acb->canceled = false;
+    acb->cancelable = true;
+    acb->finished = NULL;
     acb->coroutine = qemu_coroutine_self();
     acb->ret = 0;
     acb->nr_pending = 0;
-- 
1.8.1.3.566.gaa39828

* Re: [Qemu-devel] [PATCH for-1.6 01/11] ignore SIGPIPE in qemu-img and qemu-io
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 01/11] ignore SIGPIPE in qemu-img and qemu-io MORITA Kazutaka
@ 2013-07-23  9:19   ` Paolo Bonzini
  2013-08-03  3:52     ` Doug Goldstein
  0 siblings, 1 reply; 20+ messages in thread
From: Paolo Bonzini @ 2013-07-23  9:19 UTC (permalink / raw)
  To: MORITA Kazutaka
  Cc: Kevin Wolf, sheepdog, qemu-devel, Stefan Hajnoczi, qemu-stable

On 23/07/2013 10:30, MORITA Kazutaka wrote:
> This prevents the tools from being killed by SIGPIPE when they write
> data to a connection that has been closed on the other side.
> 
> Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
> ---
>  qemu-img.c | 4 ++++
>  qemu-io.c  | 4 ++++
>  2 files changed, 8 insertions(+)
> 
> diff --git a/qemu-img.c b/qemu-img.c
> index c55ca5c..919d464 100644
> --- a/qemu-img.c
> +++ b/qemu-img.c
> @@ -2319,6 +2319,10 @@ int main(int argc, char **argv)
>      const img_cmd_t *cmd;
>      const char *cmdname;
>  
> +#ifdef CONFIG_POSIX
> +    signal(SIGPIPE, SIG_IGN);
> +#endif
> +
>      error_set_progname(argv[0]);
>  
>      qemu_init_main_loop();
> diff --git a/qemu-io.c b/qemu-io.c
> index cb9def5..d54dc86 100644
> --- a/qemu-io.c
> +++ b/qemu-io.c
> @@ -335,6 +335,10 @@ int main(int argc, char **argv)
>      int opt_index = 0;
>      int flags = BDRV_O_UNMAP;
>  
> +#ifdef CONFIG_POSIX
> +    signal(SIGPIPE, SIG_IGN);
> +#endif
> +
>      progname = basename(argv[0]);
>  
>      while ((c = getopt_long(argc, argv, sopt, lopt, &opt_index)) != -1) {
> 

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

and adding qemu-stable for this one.

* Re: [Qemu-devel] [PATCH 02/11] iov: handle EOF in iov_send_recv
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 02/11] iov: handle EOF in iov_send_recv MORITA Kazutaka
@ 2013-07-23 11:28   ` Paolo Bonzini
  2013-08-03  3:48     ` Doug Goldstein
  0 siblings, 1 reply; 20+ messages in thread
From: Paolo Bonzini @ 2013-07-23 11:28 UTC (permalink / raw)
  To: MORITA Kazutaka
  Cc: Kevin Wolf, sheepdog, qemu-devel, Stefan Hajnoczi, qemu-stable

On 23/07/2013 10:30, MORITA Kazutaka wrote:
> Without this patch, iov_send_recv() never returns when do_send_recv()
> returns zero.
> 
> Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
> ---
>  util/iov.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/util/iov.c b/util/iov.c
> index cc6e837..f705586 100644
> --- a/util/iov.c
> +++ b/util/iov.c
> @@ -202,6 +202,12 @@ ssize_t iov_send_recv(int sockfd, struct iovec *iov, unsigned iov_cnt,
>              return -1;
>          }
>  
> +        if (ret == 0 && !do_send) {
> +            /* recv returns 0 when the peer has performed an orderly
> +             * shutdown. */
> +            break;
> +        }
> +
>          /* Prepare for the next iteration */
>          offset += ret;
>          total += ret;
> 

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

... and should also be in 1.5.2.

Paolo

* Re: [Qemu-devel] [PATCH 03/11] qemu-sockets: make wait_for_connect be invoked in qemu_aio_wait
  2013-07-23  8:30 ` [Qemu-devel] [PATCH 03/11] qemu-sockets: make wait_for_connect be invoked in qemu_aio_wait MORITA Kazutaka
@ 2013-07-23 11:36   ` Paolo Bonzini
  2013-07-24  7:41     ` [Qemu-devel] [sheepdog] " MORITA Kazutaka
  0 siblings, 1 reply; 20+ messages in thread
From: Paolo Bonzini @ 2013-07-23 11:36 UTC (permalink / raw)
  To: MORITA Kazutaka; +Cc: Kevin Wolf, sheepdog, qemu-devel, Stefan Hajnoczi

On 23/07/2013 10:30, MORITA Kazutaka wrote:
> This allows us to use inet_nonblocking_connect() and
> unix_nonblocking_connect() in block drivers.
> 
> qemu-ga needs to link block-obj to resolve dependencies of
> qemu_aio_set_fd_handler().
> 
> Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>

I'm not sure this is safe.  You could have e.g. migration start during
qemu_aio_wait().

Paolo

> ---
>  Makefile            |  4 ++--
>  util/qemu-sockets.c | 15 ++++++++++-----
>  2 files changed, 12 insertions(+), 7 deletions(-)
> 
> diff --git a/Makefile b/Makefile
> index c06bfab..5fe2e0f 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -197,7 +197,7 @@ fsdev/virtfs-proxy-helper$(EXESUF): LIBS += -lcap
>  qemu-img-cmds.h: $(SRC_PATH)/qemu-img-cmds.hx
>  	$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@,"  GEN   $@")
>  
> -qemu-ga$(EXESUF): LIBS = $(LIBS_QGA)
> +qemu-ga$(EXESUF): LIBS = $(LIBS_QGA) $(LIBS_TOOLS)
>  qemu-ga$(EXESUF): QEMU_CFLAGS += -I qga/qapi-generated
>  
>  gen-out-type = $(subst .,-,$(suffix $@))
> @@ -227,7 +227,7 @@ $(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
>  QGALIB_GEN=$(addprefix qga/qapi-generated/, qga-qapi-types.h qga-qapi-visit.h qga-qmp-commands.h)
>  $(qga-obj-y) qemu-ga.o: $(QGALIB_GEN)
>  
> -qemu-ga$(EXESUF): $(qga-obj-y) libqemuutil.a libqemustub.a
> +qemu-ga$(EXESUF): $(qga-obj-y) $(block-obj-y) libqemuutil.a libqemustub.a
>  	$(call LINK, $^)
>  
>  clean:
> diff --git a/util/qemu-sockets.c b/util/qemu-sockets.c
> index 095716e..8b21fd1 100644
> --- a/util/qemu-sockets.c
> +++ b/util/qemu-sockets.c
> @@ -218,6 +218,11 @@ typedef struct ConnectState {
>  static int inet_connect_addr(struct addrinfo *addr, bool *in_progress,
>                               ConnectState *connect_state, Error **errp);
>  
> +static int return_true(void *opaque)
> +{
> +    return 1;
> +}
> +
>  static void wait_for_connect(void *opaque)
>  {
>      ConnectState *s = opaque;
> @@ -225,7 +230,7 @@ static void wait_for_connect(void *opaque)
>      socklen_t valsize = sizeof(val);
>      bool in_progress;
>  
> -    qemu_set_fd_handler2(s->fd, NULL, NULL, NULL, NULL);
> +    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL, NULL);
>  
>      do {
>          rc = qemu_getsockopt(s->fd, SOL_SOCKET, SO_ERROR, &val, &valsize);
> @@ -288,8 +293,8 @@ static int inet_connect_addr(struct addrinfo *addr, bool *in_progress,
>  
>      if (connect_state != NULL && QEMU_SOCKET_RC_INPROGRESS(rc)) {
>          connect_state->fd = sock;
> -        qemu_set_fd_handler2(sock, NULL, NULL, wait_for_connect,
> -                             connect_state);
> +        qemu_aio_set_fd_handler(sock, NULL, wait_for_connect, return_true,
> +                                connect_state);
>          *in_progress = true;
>      } else if (rc < 0) {
>          error_set_errno(errp, errno, QERR_SOCKET_CONNECT_FAILED);
> @@ -749,8 +754,8 @@ int unix_connect_opts(QemuOpts *opts, Error **errp,
>  
>      if (connect_state != NULL && QEMU_SOCKET_RC_INPROGRESS(rc)) {
>          connect_state->fd = sock;
> -        qemu_set_fd_handler2(sock, NULL, NULL, wait_for_connect,
> -                             connect_state);
> +        qemu_aio_set_fd_handler(sock, NULL, wait_for_connect, return_true,
> +                                connect_state);
>          return sock;
>      } else if (rc >= 0) {
>          /* non blocking socket immediate success, call callback */
> 

* Re: [Qemu-devel] [sheepdog] [PATCH 03/11] qemu-sockets: make wait_for_connect be invoked in qemu_aio_wait
  2013-07-23 11:36   ` Paolo Bonzini
@ 2013-07-24  7:41     ` MORITA Kazutaka
  0 siblings, 0 replies; 20+ messages in thread
From: MORITA Kazutaka @ 2013-07-24  7:41 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Kevin Wolf, Stefan Hajnoczi, sheepdog, qemu-devel,
	MORITA Kazutaka

At Tue, 23 Jul 2013 13:36:08 +0200,
Paolo Bonzini wrote:
> 
> On 23/07/2013 10:30, MORITA Kazutaka wrote:
> > This allows us to use inet_nonblocking_connect() and
> > unix_nonblocking_connect() in block drivers.
> > 
> > qemu-ga needs to link block-obj to resolve dependencies of
> > qemu_aio_set_fd_handler().
> > 
> > Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
> 
> I'm not sure this is safe.  You could have e.g. migration start during
> qemu_aio_wait().

I thought it was safe.  QEMU creates another thread for migration and
it can be started at any time either way.  However, so as not to
affect the existing code, it might be better to create a separate
nonblocking connect path for qemu_aio_wait().

I'm thinking of dropping this patch from the series and leaving it for
another day.  Usually, sheepdog users run a local sheepdog daemon to
connect to, and connect() is unlikely to block for a long time, so
using a blocking connect wouldn't be a big problem.

Thanks,

Kazutaka

* Re: [Qemu-devel] [PATCH 02/11] iov: handle EOF in iov_send_recv
  2013-07-23 11:28   ` Paolo Bonzini
@ 2013-08-03  3:48     ` Doug Goldstein
  2013-08-05 12:30       ` Kevin Wolf
  0 siblings, 1 reply; 20+ messages in thread
From: Doug Goldstein @ 2013-08-03  3:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, sheepdog, Stefan Hajnoczi, MORITA Kazutaka,
	qemu-stable

On Tue, Jul 23, 2013 at 6:28 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 23/07/2013 10:30, MORITA Kazutaka wrote:
>> Without this patch, iov_send_recv() never returns when do_send_recv()
>> returns zero.
>>
>> Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
>> ---
>>  util/iov.c | 6 ++++++
>>  1 file changed, 6 insertions(+)
>>
>> diff --git a/util/iov.c b/util/iov.c
>> index cc6e837..f705586 100644
>> --- a/util/iov.c
>> +++ b/util/iov.c
>> @@ -202,6 +202,12 @@ ssize_t iov_send_recv(int sockfd, struct iovec *iov, unsigned iov_cnt,
>>              return -1;
>>          }
>>
>> +        if (ret == 0 && !do_send) {
>> +            /* recv returns 0 when the peer has performed an orderly
>> +             * shutdown. */
>> +            break;
>> +        }
>> +
>>          /* Prepare for the next iteration */
>>          offset += ret;
>>          total += ret;
>>
>
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
>
> ... and should also be in 1.5.2.
>
> Paolo
>

Nudge so this doesn't get forgotten about. It hasn't hit master yet.

-- 
Doug Goldstein

* Re: [Qemu-devel] [PATCH for-1.6 01/11] ignore SIGPIPE in qemu-img and qemu-io
  2013-07-23  9:19   ` [Qemu-devel] [PATCH for-1.6 " Paolo Bonzini
@ 2013-08-03  3:52     ` Doug Goldstein
  2013-08-05 11:57       ` Kevin Wolf
  0 siblings, 1 reply; 20+ messages in thread
From: Doug Goldstein @ 2013-08-03  3:52 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, sheepdog, Stefan Hajnoczi, MORITA Kazutaka,
	qemu-stable

On Tue, Jul 23, 2013 at 4:19 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 23/07/2013 10:30, MORITA Kazutaka wrote:
>> This prevents the tools from being killed by SIGPIPE when they write
>> data to a connection that has been closed on the other side.
>>
>> Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
>> ---
>>  qemu-img.c | 4 ++++
>>  qemu-io.c  | 4 ++++
>>  2 files changed, 8 insertions(+)
>>
>> diff --git a/qemu-img.c b/qemu-img.c
>> index c55ca5c..919d464 100644
>> --- a/qemu-img.c
>> +++ b/qemu-img.c
>> @@ -2319,6 +2319,10 @@ int main(int argc, char **argv)
>>      const img_cmd_t *cmd;
>>      const char *cmdname;
>>
>> +#ifdef CONFIG_POSIX
>> +    signal(SIGPIPE, SIG_IGN);
>> +#endif
>> +
>>      error_set_progname(argv[0]);
>>
>>      qemu_init_main_loop();
>> diff --git a/qemu-io.c b/qemu-io.c
>> index cb9def5..d54dc86 100644
>> --- a/qemu-io.c
>> +++ b/qemu-io.c
>> @@ -335,6 +335,10 @@ int main(int argc, char **argv)
>>      int opt_index = 0;
>>      int flags = BDRV_O_UNMAP;
>>
>> +#ifdef CONFIG_POSIX
>> +    signal(SIGPIPE, SIG_IGN);
>> +#endif
>> +
>>      progname = basename(argv[0]);
>>
>>      while ((c = getopt_long(argc, argv, sopt, lopt, &opt_index)) != -1) {
>>
>
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
>
> and adding qemu-stable for this one.
>

Nudge so this isn't forgotten about since it hasn't hit master yet.

-- 
Doug Goldstein

* Re: [Qemu-devel] [PATCH for-1.6 01/11] ignore SIGPIPE in qemu-img and qemu-io
  2013-08-03  3:52     ` Doug Goldstein
@ 2013-08-05 11:57       ` Kevin Wolf
  0 siblings, 0 replies; 20+ messages in thread
From: Kevin Wolf @ 2013-08-05 11:57 UTC (permalink / raw)
  To: Doug Goldstein
  Cc: Stefan Hajnoczi, sheepdog, qemu-devel, MORITA Kazutaka,
	qemu-stable

On 03.08.2013 at 05:52, Doug Goldstein wrote:
> On Tue, Jul 23, 2013 at 4:19 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> > On 23/07/2013 10:30, MORITA Kazutaka wrote:
> >> This prevents the tools from being killed by SIGPIPE when they write
> >> data to a connection that has been closed on the other side.
> >>
> >> Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
> >> ---
> >>  qemu-img.c | 4 ++++
> >>  qemu-io.c  | 4 ++++
> >>  2 files changed, 8 insertions(+)
> >>
> >> diff --git a/qemu-img.c b/qemu-img.c
> >> index c55ca5c..919d464 100644
> >> --- a/qemu-img.c
> >> +++ b/qemu-img.c
> >> @@ -2319,6 +2319,10 @@ int main(int argc, char **argv)
> >>      const img_cmd_t *cmd;
> >>      const char *cmdname;
> >>
> >> +#ifdef CONFIG_POSIX
> >> +    signal(SIGPIPE, SIG_IGN);
> >> +#endif
> >> +
> >>      error_set_progname(argv[0]);
> >>
> >>      qemu_init_main_loop();
> >> diff --git a/qemu-io.c b/qemu-io.c
> >> index cb9def5..d54dc86 100644
> >> --- a/qemu-io.c
> >> +++ b/qemu-io.c
> >> @@ -335,6 +335,10 @@ int main(int argc, char **argv)
> >>      int opt_index = 0;
> >>      int flags = BDRV_O_UNMAP;
> >>
> >> +#ifdef CONFIG_POSIX
> >> +    signal(SIGPIPE, SIG_IGN);
> >> +#endif
> >> +
> >>      progname = basename(argv[0]);
> >>
> >>      while ((c = getopt_long(argc, argv, sopt, lopt, &opt_index)) != -1) {
> >>
> >
> > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> >
> > and adding qemu-stable for this one.
> >
> 
> Nudge so this isn't forgotten about since it hasn't hit master yet.

Thanks, applied to the block branch.

Kevin

* Re: [Qemu-devel] [PATCH 02/11] iov: handle EOF in iov_send_recv
  2013-08-03  3:48     ` Doug Goldstein
@ 2013-08-05 12:30       ` Kevin Wolf
  0 siblings, 0 replies; 20+ messages in thread
From: Kevin Wolf @ 2013-08-05 12:30 UTC (permalink / raw)
  To: Doug Goldstein
  Cc: Stefan Hajnoczi, sheepdog, qemu-devel, MORITA Kazutaka,
	qemu-stable

On 03.08.2013 at 05:48, Doug Goldstein wrote:
> On Tue, Jul 23, 2013 at 6:28 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> > On 23/07/2013 10:30, MORITA Kazutaka wrote:
> >> Without this patch, iov_send_recv() never returns when do_send_recv()
> >> returns zero.
> >>
> >> Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
> >> ---
> >>  util/iov.c | 6 ++++++
> >>  1 file changed, 6 insertions(+)
> >>
> >> diff --git a/util/iov.c b/util/iov.c
> >> index cc6e837..f705586 100644
> >> --- a/util/iov.c
> >> +++ b/util/iov.c
> >> @@ -202,6 +202,12 @@ ssize_t iov_send_recv(int sockfd, struct iovec *iov, unsigned iov_cnt,
> >>              return -1;
> >>          }
> >>
> >> +        if (ret == 0 && !do_send) {
> >> +            /* recv returns 0 when the peer has performed an orderly
> >> +             * shutdown. */
> >> +            break;
> >> +        }
> >> +
> >>          /* Prepare for the next iteration */
> >>          offset += ret;
> >>          total += ret;
> >>
> >
> > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> >
> > ... and should also be in 1.5.2.
> >
> > Paolo
> >
> 
> Nudge so this doesn't get forgotten about. It hasn't hit master yet.

Thanks, applied to the block branch.

Kevin

end of thread

Thread overview: 20+ messages
2013-07-23  8:30 [Qemu-devel] [PATCH 00/11] sheepdog: reconnect server after connection failure MORITA Kazutaka
2013-07-23  8:30 ` [Qemu-devel] [PATCH 01/11] ignore SIGPIPE in qemu-img and qemu-io MORITA Kazutaka
2013-07-23  9:19   ` [Qemu-devel] [PATCH for-1.6 " Paolo Bonzini
2013-08-03  3:52     ` Doug Goldstein
2013-08-05 11:57       ` Kevin Wolf
2013-07-23  8:30 ` [Qemu-devel] [PATCH 02/11] iov: handle EOF in iov_send_recv MORITA Kazutaka
2013-07-23 11:28   ` Paolo Bonzini
2013-08-03  3:48     ` Doug Goldstein
2013-08-05 12:30       ` Kevin Wolf
2013-07-23  8:30 ` [Qemu-devel] [PATCH 03/11] qemu-sockets: make wait_for_connect be invoked in qemu_aio_wait MORITA Kazutaka
2013-07-23 11:36   ` Paolo Bonzini
2013-07-24  7:41     ` [Qemu-devel] [sheepdog] " MORITA Kazutaka
2013-07-23  8:30 ` [Qemu-devel] [PATCH 04/11] sheepdog: make connect nonblocking MORITA Kazutaka
2013-07-23  8:30 ` [Qemu-devel] [PATCH 05/11] sheepdog: check return values of qemu_co_recv/send correctly MORITA Kazutaka
2013-07-23  8:30 ` [Qemu-devel] [PATCH 06/11] sheepdog: handle vdi objects in resend_aio_req MORITA Kazutaka
2013-07-23  8:30 ` [Qemu-devel] [PATCH 07/11] sheepdog: reload inode outside of resend_aioreq MORITA Kazutaka
2013-07-23  8:30 ` [Qemu-devel] [PATCH 08/11] coroutine: add co_aio_sleep_ns() to allow sleep in block drivers MORITA Kazutaka
2013-07-23  8:30 ` [Qemu-devel] [PATCH 09/11] sheepdog: try to reconnect to sheepdog after network error MORITA Kazutaka
2013-07-23  8:30 ` [Qemu-devel] [PATCH 10/11] sheepdog: make add_aio_request and send_aioreq void functions MORITA Kazutaka
2013-07-23  8:30 ` [Qemu-devel] [PATCH 11/11] sheepdog: cancel aio requests if possible MORITA Kazutaka
