From: Kevin Wolf <kwolf@redhat.com>
To: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
Cc: Peter Maydell <peter.maydell@linaro.org>,
	stefanha@linux.vnet.ibm.com, Stefan Hajnoczi <stefanha@gmail.com>,
	qemu-devel@nongnu.org, Markus Armbruster <armbru@redhat.com>,
	edgar.iglesias@gmail.com, john.williams@petalogix.com
Subject: Re: [Qemu-devel] [RFC] block: Removed coroutine ownership assumption
Date: Mon, 02 Jul 2012 11:04:07 +0200
Message-ID: <4FF16407.6030703@redhat.com>
In-Reply-To: <CAEgOgz6E4+0gKh6Fx-FOb3sd=nQLszbfWYpSX8HDvH6-c2Y98A@mail.gmail.com>

On 02.07.2012 10:57, Peter Crosthwaite wrote:
> On Mon, Jul 2, 2012 at 6:50 PM, Stefan Hajnoczi <stefanha@gmail.com> wrote:
>> On Fri, Jun 29, 2012 at 12:51 PM, Peter Crosthwaite
>> <peter.crosthwaite@petalogix.com> wrote:
>>> BTW, yielding is one thing, but the elephant in the room here is
>>> resumption of the coroutine. When AIO yields my coroutine, I need to
>>> talk to AIO to get it resumed (Stefan's proposed edit to my code).
>>> What happens when, tomorrow, something in QOM or a device model is
>>> implemented with coroutines too? How do I know who yielded my
>>> routines and what API to call to re-enter them?
>>
>> Going back to what Kevin said, qemu_aio_wait() isn't block-layer
>> specific, and there will never be a requirement to call any other
>> magic functions.
>>
>> QEMU is event-driven and you must pump events.  If you call into
>> another subsystem, be prepared to pump events so that I/O can make
>> progress.  It's an assumption that is so central to QEMU architecture
>> that I don't see it as a problem.
>>
>> Once the main loop is running, the event loop is taken care of for
>> you.  But during startup you're in a different environment and need to
>> pump events yourself.
>>
>> If we drop bdrv_read()/bdrv_write() this won't change.  You'll have to
>> call bdrv_co_readv()/bdrv_co_writev() and pump events.
>>
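
For illustration, this is roughly what "pump events yourself" looks like
from init code (only a sketch, not code from this thread: the InitReadCo
struct and init_read() helper are made up, and the header names assume
the QEMU tree of that time). A coroutine runs bdrv_co_readv() while the
caller loops on qemu_aio_wait() until it has finished:

#include "block.h"          /* bdrv_co_readv(), BlockDriverState */
#include "qemu-coroutine.h" /* qemu_coroutine_create(), qemu_coroutine_enter() */
#include "qemu-aio.h"       /* qemu_aio_wait() */

typedef struct InitReadCo {
    BlockDriverState *bs;
    int64_t sector_num;
    int nb_sectors;
    QEMUIOVector *qiov;
    int ret;
    bool done;
} InitReadCo;

/* Runs in coroutine context; bdrv_co_readv() may yield internally. */
static void coroutine_fn init_read_entry(void *opaque)
{
    InitReadCo *irc = opaque;

    irc->ret = bdrv_co_readv(irc->bs, irc->sector_num,
                             irc->nb_sectors, irc->qiov);
    irc->done = true;
}

/* Called from init code, outside coroutine context. */
static int init_read(BlockDriverState *bs, int64_t sector_num,
                     int nb_sectors, QEMUIOVector *qiov)
{
    InitReadCo irc = {
        .bs = bs, .sector_num = sector_num,
        .nb_sectors = nb_sectors, .qiov = qiov,
    };
    Coroutine *co = qemu_coroutine_create(init_read_entry);

    qemu_coroutine_enter(co, &irc);
    while (!irc.done) {
        qemu_aio_wait();    /* pump events until the request completes */
    }
    return irc.ret;
}

This is essentially what the bdrv_read()/bdrv_write() emulation does
internally when it is called from outside a coroutine.
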
> 
> Just tracing bdrv_aio_readv(): it bypasses the fast-path logic, so a
> conversion of pflash to bdrv_aio_readv() is a possible solution here.
> 
> bdrv_aio_readv() -> bdrv_co_aio_rw_vector():
> 
> static BlockDriverAIOCB *bdrv_co_aio_rw_vector (..) {
>     Coroutine *co;
>     BlockDriverAIOCBCoroutine *acb;
> 
>     ...
> 
>     co = qemu_coroutine_create(bdrv_co_do_rw);
>     qemu_coroutine_enter(co, acb);
> 
>     return &acb->common;
> }
> 
> There is no conditional around the qemu_coroutine_create(), so it will
> always create a new coroutine for its work, which solves my problem.
> All I need to do is pump events once at the end of machine model
> creation, and my coroutines will never yield or get queued by
> block/AIO. Does that sound like a solution?

If you don't need the read data in your initialisation code, then yes,
that would work. bdrv_aio_* will always create a new coroutine. I just
assumed that you wanted to use the data right away, and then using the
AIO functions wouldn't have made much sense.
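
To illustrate that pattern (again just a sketch, not from this thread;
MyFlashState and the function names are made up), the device would kick
off the read at init time and only touch the data from the completion
callback:

#include "block.h"          /* bdrv_aio_readv(), BDRV_SECTOR_BITS */
#include "qemu-common.h"    /* QEMUIOVector, qemu_iovec_init(), qemu_iovec_add() */

typedef struct MyFlashState {
    BlockDriverState *bs;
    void *storage;          /* guest-visible flash contents */
    uint64_t total_len;     /* multiple of the 512-byte sector size */
    QEMUIOVector qiov;
    bool read_done;
} MyFlashState;

static void my_flash_read_cb(void *opaque, int ret)
{
    MyFlashState *pfl = opaque;

    if (ret < 0) {
        fprintf(stderr, "flash read failed: %s\n", strerror(-ret));
        return;
    }
    pfl->read_done = true;  /* data in pfl->storage is now valid */
}

static void my_flash_start_read(MyFlashState *pfl)
{
    qemu_iovec_init(&pfl->qiov, 1);
    qemu_iovec_add(&pfl->qiov, pfl->storage, pfl->total_len);

    /* unconditionally spawns a fresh coroutine internally,
     * see bdrv_co_aio_rw_vector() quoted above */
    bdrv_aio_readv(pfl->bs, 0, &pfl->qiov,
                   pfl->total_len >> BDRV_SECTOR_BITS,
                   my_flash_read_cb, pfl);
}

The data only becomes valid once the callback has run, which is why this
approach only helps when the init code itself doesn't look at it.
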

Kevin
