* [Qemu-devel] [PULL 0/3] NBD changes for 2013-04-19
  From: Paolo Bonzini @ 2013-04-19 14:30 UTC
  To: qemu-devel; +Cc: stefanha

The following changes since commit e2ec3f976803b360c70d9ae2ba13852fa5d11665:

  qjson: to_json() case QTYPE_QSTRING is buggy, rewrite (2013-04-13 19:40:25 +0000)

are available in the git repository at:

  git://github.com/bonzini/qemu.git nbd-next

for you to fetch changes up to 97ebbab0e324831dff47dbfa4bed55808cb3ec74:

  nbd: set TCP_NODELAY (2013-04-15 16:35:17 +0200)

----------------------------------------------------------------
Stefan Hajnoczi (3):
      nbd: unlock mutex in nbd_co_send_request() error path
      nbd: use TCP_CORK in nbd_co_send_request()
      nbd: set TCP_NODELAY

 block/nbd.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

-- 
1.8.1.4
* [Qemu-devel] [PATCH 1/3] nbd: unlock mutex in nbd_co_send_request() error path
  From: Paolo Bonzini @ 2013-04-19 14:30 UTC
  To: qemu-devel; +Cc: qemu-stable, stefanha

From: Stefan Hajnoczi <stefanha@redhat.com>

Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 block/nbd.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/nbd.c b/block/nbd.c
index eff683c..662df16 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -339,7 +339,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
         ret = qemu_co_sendv(s->sock, qiov->iov, qiov->niov,
                             offset, request->len);
         if (ret != request->len) {
-            return -EIO;
+            rc = -EIO;
         }
     }
     qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL,
-- 
1.8.1.4
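[The one-line diff above is easy to misread, so here is a minimal
standalone sketch of the bug class being fixed: an early return from a
locked region that skips the unlock.  All names below are illustrative,
not QEMU code.]

/*
 * Illustrative sketch only -- not QEMU code.  Returning from inside a
 * locked region leaks the lock; assigning to rc and falling through to
 * the shared cleanup path keeps lock/unlock balanced.
 */
#include <errno.h>
#include <pthread.h>

static pthread_mutex_t send_lock = PTHREAD_MUTEX_INITIALIZER;

static int send_with_lock(int (*do_send)(void))
{
    int rc = 0;

    pthread_mutex_lock(&send_lock);
    if (do_send() != 0) {
        rc = -EIO;                        /* record the error...  */
    }
    pthread_mutex_unlock(&send_lock);     /* ...but always unlock */
    return rc;
}

[In the patched function the lock in question is s->send_mutex, taken
near the top of nbd_co_send_request() and released near the bottom; the
early "return -EIO" skipped that release, so the next sender would block
forever.]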
* [Qemu-devel] [PATCH 2/3] nbd: use TCP_CORK in nbd_co_send_request()
  From: Paolo Bonzini @ 2013-04-19 14:30 UTC
  To: qemu-devel; +Cc: stefanha

From: Stefan Hajnoczi <stefanha@redhat.com>

Use TCP_CORK to defer packet transmission until both the header and the
payload have been written.

Suggested-by: Nick Thomas <nick@bytemark.co.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 block/nbd.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 662df16..485bbf0 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -334,13 +334,23 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
     s->send_coroutine = qemu_coroutine_self();
     qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write,
                             nbd_have_request, s);
-    rc = nbd_send_request(s->sock, request);
-    if (rc >= 0 && qiov) {
-        ret = qemu_co_sendv(s->sock, qiov->iov, qiov->niov,
-                            offset, request->len);
-        if (ret != request->len) {
-            rc = -EIO;
+    if (qiov) {
+        if (!s->is_unix) {
+            socket_set_cork(s->sock, 1);
         }
+        rc = nbd_send_request(s->sock, request);
+        if (rc >= 0) {
+            ret = qemu_co_sendv(s->sock, qiov->iov, qiov->niov,
+                                offset, request->len);
+            if (ret != request->len) {
+                rc = -EIO;
+            }
+        }
+        if (!s->is_unix) {
+            socket_set_cork(s->sock, 0);
+        }
+    } else {
+        rc = nbd_send_request(s->sock, request);
     }
     qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL,
                             nbd_have_request, s);
-- 
1.8.1.4
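[For readers who have not used TCP_CORK before, the pattern above can be
shown with plain POSIX sockets.  This is an illustrative sketch only,
with error handling omitted; TCP_CORK is Linux-specific, and QEMU's
socket_set_cork() helper is essentially a wrapper around this same
setsockopt() call.]

/*
 * Standalone sketch of the corking pattern: queue the header, append
 * the payload, then uncork so the kernel transmits full-sized packets.
 */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

static void set_cork(int fd, int on)
{
    setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
}

static void send_corked(int fd, const void *hdr, size_t hdr_len,
                        const void *payload, size_t payload_len)
{
    set_cork(fd, 1);                    /* hold back partial frames   */
    write(fd, hdr, hdr_len);            /* header is queued, not sent */
    write(fd, payload, payload_len);    /* payload appended to it     */
    set_cork(fd, 0);                    /* uncork: flush the data     */
}

[Corking trades a tiny delay inside the send path for fewer, larger
packets; combined with TCP_NODELAY in the next patch it gives low
latency without small-packet overhead.  The Unix-socket case is skipped
because corking only applies to TCP.]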
* [Qemu-devel] [PATCH 3/3] nbd: set TCP_NODELAY
  From: Paolo Bonzini @ 2013-04-19 14:30 UTC
  To: qemu-devel; +Cc: stefanha

From: Stefan Hajnoczi <stefanha@redhat.com>

Disable the Nagle algorithm to reduce latency.  Note that this means we
must also use TCP_CORK when sending a header followed by a payload, to
avoid transmitting them as many small packets.  The previous patch took
care of that.

Suggested-by: Nick Thomas <nick@bytemark.co.uk>
Tested-by: Nick Thomas <nick@bytemark.co.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 block/nbd.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/block/nbd.c b/block/nbd.c
index 485bbf0..d9dc454 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -406,6 +406,9 @@ static int nbd_establish_connection(BlockDriverState *bs)
         sock = unix_socket_outgoing(qemu_opt_get(s->socket_opts, "path"));
     } else {
         sock = tcp_socket_outgoing_opts(s->socket_opts);
+        if (sock >= 0) {
+            socket_set_nodelay(sock);
+        }
     }
 
     /* Failed to establish connection */
-- 
1.8.1.4
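[Nagle's algorithm holds back small writes so they can be coalesced;
with a request/response protocol like NBD it interacts with delayed
ACKs and can stall each request for tens of milliseconds.  As a
standalone illustration of what disabling it amounts to -- a sketch
assuming the usual setsockopt() wrapper, not the QEMU helper itself:]

/*
 * Disable Nagle on a connected TCP socket.  QEMU's
 * socket_set_nodelay() performs an equivalent setsockopt(); this
 * illustrative version returns the raw result (0 on success).
 */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static int set_nodelay(int fd)
{
    int on = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
}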
* [Qemu-devel] [PATCH 0/3] nbd: use TCP_NODELAY
  From: Stefan Hajnoczi @ 2013-04-15 14:14 UTC
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Stefan Hajnoczi, Nick Thomas

The nbd block driver should use TCP_NODELAY.  Nick Thomas
<nick@bytemark.co.uk> measured a 40 millisecond latency added by the
Nagle algorithm.

This series turns on TCP_NODELAY.  This requires using TCP_CORK so that
NBD requests carrying a payload after the header are still sent
efficiently.

Finally, fix a bug where we forget to unlock a mutex when sending
fails.

Stefan Hajnoczi (3):
  nbd: unlock mutex in nbd_co_send_request() error path
  nbd: use TCP_CORK in nbd_co_send_request()
  nbd: set TCP_NODELAY

 block/nbd.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

-- 
1.8.1.4