From: "Pavel Dovgalyuk" <dovgaluk@ispras.ru>
To: 'Kevin Wolf' <kwolf@redhat.com>
Cc: edgar.iglesias@xilinx.com, peter.maydell@linaro.org,
	igor.rubinov@gmail.com, mark.burton@greensocs.com,
	real@ispras.ru, hines@cert.org, qemu-devel@nongnu.org,
	maria.klimushenkova@ispras.ru, stefanha@redhat.com,
	pbonzini@redhat.com, batuzovk@ispras.ru, alex.bennee@linaro.org,
	fred.konrad@greensocs.com
Subject: Re: [Qemu-devel] [PATCH 3/3] replay: introduce block devices record/replay
Date: Thu, 25 Feb 2016 12:06:41 +0300	[thread overview]
Message-ID: <000c01d16fab$d85e3b40$891ab1c0$@ru> (raw)
In-Reply-To: <20160224131407.GF4485@noname.redhat.com>

> From: Kevin Wolf [mailto:kwolf@redhat.com]
> > > Coroutines aren't randomly assigned to threads, but threads actively
> > > enter coroutines. To my knowledge this happens only when starting a
> > > request (either vcpu or I/O thread; consistent per device) or by a
> > > callback when some event happens (only I/O thread). I can't see any
> > > non-determinism here.
> >
> > The behavior of coroutines looks strange to me.
> > Consider the code below (the co_readv function of the replay driver).
> > In record mode it somehow changes the thread it is assigned to:
> > code at point A is executed in the CPU thread, while code at point B runs in some other thread.
> > Could this happen because the coroutine yields somewhere and its execution is resumed
> > by aio_poll, which is called from the I/O thread?
> > In that case the event-completion callback cannot execute deterministically
> > (i.e. always in the CPU thread, or always in the I/O thread).
> >
> > static int coroutine_fn blkreplay_co_readv(BlockDriverState *bs,
> >     int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
> > {
> >     BDRVBlkreplayState *s = bs->opaque;
> >     uint32_t reqid = request_id++;
> >     Request *req;
> >     int ret;
> > // A
> >     ret = bdrv_co_readv(bs->file->bs, sector_num, nb_sectors, qiov);
> >
> >     if (replay_mode == REPLAY_MODE_RECORD) {
> >         replay_save_block_event(reqid);
> >     } else {
> >         assert(replay_mode == REPLAY_MODE_PLAY);
> >         if (reqid == current_request) {
> >             current_finished = true;
> >         } else {
> >             req = block_request_insert(reqid, bs, qemu_coroutine_self());
> >             qemu_coroutine_yield();
> >             block_request_remove(req);
> >         }
> >     }
> > // B
> >     return ret;
> > }
> 
> Yes, I guess this can happen. As I described above, the coroutine can be
> entered from a vcpu thread initially. After yielding for the first time,
> it is resumed from the I/O thread. So if there are paths where the
> coroutine never yields, the coroutine completes in the original vcpu
> thread. (It's not the common case that bdrv_co_readv() doesn't yield,
> but it happens e.g. with unallocated sectors in qcow2.)
> 
> If this is a problem for you, you need to force the coroutine into the
> I/O thread. You can do that by scheduling a BH, then yield, and then let
> the BH reenter the coroutine.
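
The suggested BH detour could look roughly like the sketch below. This is illustrative only and not compilable as-is: `blkreplay_bh_cb` and the `Request` fields are made up for the example, the replay-mode branch is omitted, and the exact `qemu_coroutine_enter()` signature depends on the QEMU version (older trees take an extra opaque argument).

```c
/* Sketch only: force the request to complete in the I/O thread by
 * scheduling a BH there, yielding, and letting the BH re-enter us. */
typedef struct Request {
    Coroutine *co;
    QEMUBH *bh;
} Request;

static void blkreplay_bh_cb(void *opaque)
{
    Request *req = opaque;
    qemu_coroutine_enter(req->co, NULL);   /* runs in the I/O thread */
    qemu_bh_delete(req->bh);
    g_free(req);
}

static int coroutine_fn blkreplay_co_readv(BlockDriverState *bs,
    int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
    uint32_t reqid = request_id++;
    int ret = bdrv_co_readv(bs->file->bs, sector_num, nb_sectors, qiov);

    /* Even if bdrv_co_readv() never yielded (e.g. unallocated sectors
     * in qcow2 complete synchronously), detour through a BH so that
     * point B always runs in the I/O thread, no matter which thread
     * originally entered the coroutine. */
    Request *req = g_new(Request, 1);
    req->co = qemu_coroutine_self();
    req->bh = aio_bh_new(bdrv_get_aio_context(bs), blkreplay_bh_cb, req);
    qemu_bh_schedule(req->bh);
    qemu_coroutine_yield();

    if (replay_mode == REPLAY_MODE_RECORD) {
        replay_save_block_event(reqid);
    }
    return ret;
}
```

Since the BH callback always fires from the I/O thread's event loop, the event record/replay point becomes deterministic with respect to threads.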

Thanks, this approach seems to work. I got rid of replay_run_block_event,
because the BH basically does the same job.

There is one remaining problem with the flush event: flush callbacks are
invoked for every layer of the block driver stack, and I haven't managed to
synchronize them correctly yet. I'll probably have to add a new block driver
callback that handles the flush request for the whole driver stack.
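
One possible shape for that, sketched here purely as a hypothetical (the function name and the single-event scheme are not an existing QEMU API), is to let only the top-level replay driver record an event and delegate flushing of every lower layer to the recursive flush call:

```c
/* Hypothetical sketch: handle flush once for the whole driver stack.
 * Only the top-level blkreplay driver records a replay event; the
 * recursive bdrv_co_flush() call flushes every lower layer, so no
 * per-layer synchronization is needed. */
static int coroutine_fn blkreplay_co_flush(BlockDriverState *bs)
{
    uint32_t reqid = request_id++;      /* same id scheme as reads */
    int ret = bdrv_co_flush(bs->file->bs);

    if (replay_mode == REPLAY_MODE_RECORD) {
        replay_save_block_event(reqid); /* one event, not one per layer */
    }
    return ret;
}
```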

Pavel Dovgalyuk


Thread overview: 46+ messages
2016-02-09  5:55 [Qemu-devel] [PATCH 0/3] Deterministic replay extensions Pavel Dovgalyuk
2016-02-09  5:55 ` [Qemu-devel] [PATCH 1/3] replay: character devices Pavel Dovgalyuk
2016-02-09  5:55 ` [Qemu-devel] [PATCH 2/3] replay: introduce new checkpoint for icount warp Pavel Dovgalyuk
2016-02-09  5:55 ` [Qemu-devel] [PATCH 3/3] replay: introduce block devices record/replay Pavel Dovgalyuk
2016-02-09 10:27   ` Kevin Wolf
2016-02-09 11:52     ` Pavel Dovgalyuk
2016-02-10 11:45       ` Kevin Wolf
2016-02-10 12:05         ` Pavel Dovgalyuk
2016-02-10 12:28           ` Kevin Wolf
2016-02-10 12:51             ` Pavel Dovgalyuk
2016-02-10 13:25               ` Kevin Wolf
2016-02-10 13:33                 ` Pavel Dovgalyuk
2016-02-10 13:52                   ` Kevin Wolf
2016-02-11  6:05                 ` Pavel Dovgalyuk
2016-02-11  9:43                   ` Kevin Wolf
2016-02-11 11:00                     ` Pavel Dovgalyuk
2016-02-11 12:18                       ` Kevin Wolf
2016-02-11 12:24                         ` Pavel Dovgalyuk
2016-02-12  8:33                         ` Pavel Dovgalyuk
2016-02-12  9:44                           ` Kevin Wolf
2016-02-12 13:19                 ` Pavel Dovgalyuk
2016-02-12 13:58                   ` Kevin Wolf
2016-02-15  8:38                     ` Pavel Dovgalyuk
2016-02-15  9:10                       ` Kevin Wolf
2016-02-15  9:14                       ` Pavel Dovgalyuk
2016-02-15  9:38                         ` Kevin Wolf
2016-02-15 11:19                           ` Pavel Dovgalyuk
2016-02-15 12:46                             ` Kevin Wolf
2016-02-15 13:54                           ` Pavel Dovgalyuk
2016-02-15 14:06                             ` Kevin Wolf
2016-02-15 14:24                               ` Pavel Dovgalyuk
2016-02-15 15:01                                 ` Kevin Wolf
2016-02-16  6:25                                   ` Pavel Dovgalyuk
2016-02-16 10:02                                     ` Kevin Wolf
2016-02-16 11:20                                       ` Pavel Dovgalyuk
2016-02-16 12:54                                         ` Kevin Wolf
2016-02-18  9:18                                           ` Pavel Dovgalyuk
2016-02-20  7:11                                           ` Pavel Dovgalyuk
2016-02-22 11:06                                             ` Kevin Wolf
2016-02-24 11:59                                               ` Pavel Dovgalyuk
2016-02-24 13:14                                                 ` Kevin Wolf
2016-02-25  9:06                                                   ` Pavel Dovgalyuk [this message]
2016-02-26  9:01                                                     ` Kevin Wolf
2016-02-29  7:03                                                       ` Pavel Dovgalyuk
2016-02-29  7:54                                                         ` Kevin Wolf
2016-02-15 14:50                               ` Pavel Dovgalyuk
