From: Paolo Bonzini <pbonzini@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>, Fam Zheng <famz@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>,
qemu-devel@nongnu.org, qemu-block@nongnu.org
Subject: Re: [Qemu-devel] [PATCH RFC 4/4] aio-posix: Use epoll in aio_poll
Date: Tue, 7 Jul 2015 17:27:16 +0200
Message-ID: <559BEFD4.7050103@redhat.com>
In-Reply-To: <20150707150850.GG28673@stefanha-thinkpad.redhat.com>

On 07/07/2015 17:08, Stefan Hajnoczi wrote:
>> > +
>> > +#define EPOLL_BATCH 128
>> > +static bool aio_poll_epoll(AioContext *ctx, bool blocking)
>> > +{
>> > +    AioHandler *node;
>> > +    bool was_dispatching;
>> > +    int i, ret;
>> > +    bool progress;
>> > +    int64_t timeout;
>> > +    struct epoll_event events[EPOLL_BATCH];
>> > +
>> > +    aio_context_acquire(ctx);
>> > +    was_dispatching = ctx->dispatching;
>> > +    progress = false;
>> > +
>> > +    /* aio_notify can avoid the expensive event_notifier_set if
>> > +     * everything (file descriptors, bottom halves, timers) will
>> > +     * be re-evaluated before the next blocking poll(). This is
>> > +     * already true when aio_poll is called with blocking == false;
>> > +     * if blocking == true, it is only true after poll() returns.
>> > +     *
>> > +     * If we're in a nested event loop, ctx->dispatching might be true.
>> > +     * In that case we can restore it just before returning, but we
>> > +     * have to clear it now.
>> > +     */
>> > +    aio_set_dispatching(ctx, !blocking);
>> > +
>> > +    ctx->walking_handlers++;
>> > +
>> > +    timeout = blocking ? aio_compute_timeout(ctx) : 0;
>> > +
>> > +    if (timeout > 0) {
>> > +        timeout = DIV_ROUND_UP(timeout, 1000000);
>> > +    }
> I think you already posted the timerfd code in an earlier series. Why
> degrade to millisecond precision? It needs to be fixed up anyway if the
> main loop uses aio_poll() in the future.

BTW, what about putting the code in a separate aio-epoll.c file?

Paolo
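
For reference, a minimal sketch of the timerfd alternative Stefan alludes to, assuming only the standard Linux timerfd_create(2)/timerfd_settime(2)/epoll(7) APIs; the helper name epoll_wait_ns and its parameters are made up for illustration and are not code from the earlier series. DIV_ROUND_UP(timeout, 1000000) rounds the nanosecond deadline up to whole milliseconds, so e.g. a 0.5 ms timeout becomes a full 1 ms sleep; arming a one-shot timerfd that is already part of the epoll set keeps nanosecond precision while epoll_wait() blocks indefinitely:

#include <stdint.h>
#include <sys/epoll.h>
#include <sys/timerfd.h>

/* 'tfd' was created with timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC)
 * and added to 'epollfd' with epoll_ctl(EPOLL_CTL_ADD) at setup time.
 * timeout_ns < 0 means block forever, matching aio_compute_timeout(). */
static int epoll_wait_ns(int epollfd, int tfd, struct epoll_event *events,
                         int maxevents, int64_t timeout_ns)
{
    struct itimerspec its = {
        .it_value.tv_sec  = timeout_ns / 1000000000LL,
        .it_value.tv_nsec = timeout_ns % 1000000000LL,
        /* it_interval stays zero: the timer fires once */
    };

    if (timeout_ns <= 0) {
        /* A zero it_value would disarm the timer, so handle the
         * non-blocking and infinite cases with plain epoll_wait. */
        return epoll_wait(epollfd, events, maxevents, timeout_ns < 0 ? -1 : 0);
    }
    if (timerfd_settime(tfd, 0 /* relative timeout */, &its, NULL) < 0) {
        return -1;
    }
    /* The timerfd expiring wakes epoll_wait like any other fd, so the
     * timeout argument can be -1 with no millisecond rounding. */
    return epoll_wait(epollfd, events, maxevents, -1);
}

When the timer is what fired, the caller sees an EPOLLIN event on tfd and should read(2) the 8-byte expiration count to clear it before the next iteration.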