* [Qemu-devel] [PATCH] sheepdog: set io_flush handler in do_co_req
From: MORITA Kazutaka @ 2013-03-11 9:01 UTC
To: kwolf, stefanha; +Cc: qemu-devel
If an io_flush handler is not set, qemu_aio_wait doesn't invoke
callbacks.
Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
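To illustrate the failure mode, here is a minimal sketch of the io_flush contract as it worked at the time. It is paraphrased from memory rather than copied from the QEMU tree, and the type and function names (HandlerSketch, aio_wait_sketch) are illustrative stand-ins, not QEMU APIs: only an fd whose io_flush handler exists and reports a pending request counts as "busy", and if nothing is busy the wait loop returns without dispatching any read/write callback.

/* Minimal sketch of the old qemu_aio_wait()/aio_poll() decision, paraphrased
 * from memory -- the types and names below are illustrative stand-ins, not
 * QEMU's real ones. */
#include <stdbool.h>
#include <stddef.h>

typedef int (*IOFlushHandler)(void *opaque);

typedef struct HandlerSketch {
    int fd;
    IOFlushHandler io_flush;     /* NULL: cannot tell whether requests are pending */
    void *opaque;
    struct HandlerSketch *next;
} HandlerSketch;

static bool aio_wait_sketch(HandlerSketch *handlers)
{
    bool busy = false;

    for (HandlerSketch *node = handlers; node; node = node->next) {
        /* Only an io_flush handler that reports a pending request makes
         * the loop consider this fd worth waiting for. */
        if (node->io_flush && node->io_flush(node->opaque) != 0) {
            busy = true;
        }
    }

    if (!busy) {
        return false;   /* nothing pending: return without polling or
                         * invoking any read/write callback */
    }

    /* ...the real code polls the fds here and dispatches callbacks... */
    return true;
}

With the sheepdog socket registered without an io_flush handler, a pending do_co_req() therefore looks idle to qemu_aio_wait(); the patch addresses this by registering have_co_req() for the duration of the request.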
---
block/sheepdog.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/block/sheepdog.c b/block/sheepdog.c
index e4ec32d..cb0eeed 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -501,6 +501,13 @@ static void restart_co_req(void *opaque)
qemu_coroutine_enter(co, NULL);
}
+static int have_co_req(void *opaque)
+{
+ /* this handler is set only when there is a pending request, so
+ * always returns 1. */
+ return 1;
+}
+
typedef struct SheepdogReqCo {
int sockfd;
SheepdogReq *hdr;
@@ -523,14 +530,14 @@ static coroutine_fn void do_co_req(void *opaque)
unsigned int *rlen = srco->rlen;
co = qemu_coroutine_self();
- qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, NULL, co);
+ qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, have_co_req, co);
ret = send_co_req(sockfd, hdr, data, wlen);
if (ret < 0) {
goto out;
}
- qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, NULL, co);
+ qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, have_co_req, co);
ret = qemu_co_recv(sockfd, hdr, sizeof(*hdr));
if (ret < sizeof(*hdr)) {
--
1.8.1.3.566.gaa39828
* Re: [Qemu-devel] [PATCH] sheepdog: set io_flush handler in do_co_req
From: Stefan Hajnoczi @ 2013-03-11 14:39 UTC
To: MORITA Kazutaka; +Cc: kwolf, qemu-devel
On Mon, Mar 11, 2013 at 06:01:02PM +0900, MORITA Kazutaka wrote:
> If an io_flush handler is not set, qemu_aio_wait doesn't invoke
> callbacks.
>
> Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
> ---
> block/sheepdog.c | 11 +++++++++--
> 1 file changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/block/sheepdog.c b/block/sheepdog.c
> index e4ec32d..cb0eeed 100644
> --- a/block/sheepdog.c
> +++ b/block/sheepdog.c
> @@ -501,6 +501,13 @@ static void restart_co_req(void *opaque)
> qemu_coroutine_enter(co, NULL);
> }
>
> +static int have_co_req(void *opaque)
> +{
> + /* this handler is set only when there is a pending request, so
> + * always returns 1. */
> + return 1;
> +}
> +
> typedef struct SheepdogReqCo {
> int sockfd;
> SheepdogReq *hdr;
> @@ -523,14 +530,14 @@ static coroutine_fn void do_co_req(void *opaque)
> unsigned int *rlen = srco->rlen;
>
> co = qemu_coroutine_self();
> - qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, NULL, co);
> + qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, have_co_req, co);
>
> ret = send_co_req(sockfd, hdr, data, wlen);
Which tree is this patch against? block/sheepdog.c:do_co_req() has a
socket_set_block(sockfd) call before this line.
Is there a guarantee that only one coroutine executes do_co_req() at a
time? Otherwise the first coroutine to finish the function sets
io_flush to NULL even though another request is still being processed.
Stefan
* Re: [Qemu-devel] [PATCH] sheepdog: set io_flush handler in do_co_req
From: MORITA Kazutaka @ 2013-03-11 15:21 UTC
To: Stefan Hajnoczi; +Cc: kwolf, qemu-devel, MORITA Kazutaka
At Mon, 11 Mar 2013 15:39:05 +0100,
Stefan Hajnoczi wrote:
>
> On Mon, Mar 11, 2013 at 06:01:02PM +0900, MORITA Kazutaka wrote:
> > [...]
> >
> > co = qemu_coroutine_self();
> > - qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, NULL, co);
> > + qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, have_co_req, co);
> >
> > ret = send_co_req(sockfd, hdr, data, wlen);
>
> Which tree is this patch against? block/sheepdog.c:do_co_req() has a
> socket_set_block(sockfd) call before this line.
Sorry, I have another patch in my local tree that needed to be sent
before this one. I'll send v2 including the missing patch.
>
> Is there a guarantee that only one coroutine executes do_co_req() at a
> time? Otherwise the first coroutine to finish the function sets
> io_flush to NULL even though another request is still being processed.
Yes - the sockfd is opened just before calling do_req() and is closed
just after the function returns, so the sheepdog driver never runs more
than one do_co_req() at a time against the same socket descriptor.
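To make the lifetime argument concrete, the calling pattern described above looks roughly like the sketch below. The helper names (connect_to_sdog, do_req, closesocket) follow block/sheepdog.c as I recall it, but the exact signatures here are assumptions for illustration, not quotations from the driver:

/* Illustrative sketch of the per-request socket lifetime described above.
 * The signatures of connect_to_sdog() and do_req() are assumptions for
 * illustration, not copied from block/sheepdog.c. */
static int send_one_request(BDRVSheepdogState *s, SheepdogReq *hdr,
                            void *data, unsigned int *wlen,
                            unsigned int *rlen)
{
    int fd, ret;

    fd = connect_to_sdog(s->addr, s->port);    /* a fresh fd for this request */
    if (fd < 0) {
        return fd;
    }

    ret = do_req(fd, hdr, data, wlen, rlen);   /* runs do_co_req() on this fd only */

    closesocket(fd);                           /* the fd never outlives the request */
    return ret;
}

Because each request gets its own descriptor and closes it before anything else can reuse it, only one do_co_req() ever has handlers (including io_flush) registered on a given fd.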
Thanks,
Kazutaka