Date: Tue, 16 Feb 2016 13:54:53 +0100
From: Kevin Wolf <kwolf@redhat.com>
To: Pavel Dovgalyuk
Cc: edgar.iglesias@xilinx.com, peter.maydell@linaro.org, igor.rubinov@gmail.com,
    mark.burton@greensocs.com, real@ispras.ru, hines@cert.org, qemu-devel@nongnu.org,
    maria.klimushenkova@ispras.ru, stefanha@redhat.com, pbonzini@redhat.com,
    batuzovk@ispras.ru, alex.bennee@linaro.org, fred.konrad@greensocs.com
Subject: Re: [Qemu-devel] [PATCH 3/3] replay: introduce block devices record/replay
Message-ID: <20160216125453.GC4920@noname.str.redhat.com>
In-Reply-To: <000a01d168ac$09929500$1cb7bf00$@ru>

On 16.02.2016 at 12:20, Pavel Dovgalyuk wrote:
> > From: Kevin Wolf [mailto:kwolf@redhat.com]
> > On 16.02.2016 at 07:25, Pavel Dovgalyuk wrote:
> > > > From: Kevin Wolf [mailto:kwolf@redhat.com]
> > > > On 15.02.2016 at 15:24, Pavel Dovgalyuk wrote:
> > > > > > From: Kevin Wolf [mailto:kwolf@redhat.com]
> > > > > > > Here is the current version of the blkreplay functions:
> > > > > > >
> > > > > > > static int coroutine_fn blkreplay_co_readv(BlockDriverState *bs,
> > > > > > >     int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
> > > > > > > {
> > > > > > >     uint32_t reqid = request_id++;
> > > > > > >     Request *req;
> > > > > > >     req = block_request_insert(reqid, bs, qemu_coroutine_self());
> > > > > > >     bdrv_co_readv(bs->file->bs, sector_num, nb_sectors, qiov);
> > > > > > >
> > > > > > >     if (replay_mode == REPLAY_MODE_RECORD) {
> > > > > > >         replay_save_block_event(reqid);
> > > > > > >     } else {
> > > > > > >         assert(replay_mode == REPLAY_MODE_PLAY);
> > > > > > >         qemu_coroutine_yield();
> > > > > > >     }
> > > > > > >     block_request_remove(req);
> > > > > > >
> > > > > > >     return 0;
> > > > > > > }
> > > > > > >
> > > > > > > void replay_run_block_event(uint32_t id)
> > > > > > > {
> > > > > > >     Request *req;
> > > > > > >     if (replay_mode == REPLAY_MODE_PLAY) {
> > > > > > >         while (!(req = block_request_find(id))) {
> > > > > > >             //aio_poll(bdrv_get_aio_context(req->bs), true);
> > > > > > >             usleep(1);
> > > > > > >         }
> > > > > >
> > > > > > How is this loop supposed to make any progress?
> > > > >
> > > > > This loop is not supposed to make any progress. It waits until a
> > > > > block_request_insert call adds the request to the queue.
> > > >
> > > > Yes. And who is supposed to add something to the queue?
> > > > You are looping and doing nothing, so no other code gets to run in this
> > > > thread. It's only aio_poll() that would run event handlers that could
> > > > eventually add something to the list.
> > >
> > > blkreplay_co_readv adds the request to the queue.
> >
> > I don't see blkreplay_co_readv() called in this loop without aio_poll().
>
> Is this mandatory?

If you don't want it to be an endless loop, someone needs to make the value
of the loop condition change...

> I thought that the version you proposed has a race condition at the following
> lines when the AIO request is handled by a worker thread. Is this correct?

Unfortunately this ended up linewrapped, trying to restore what you meant:

> Coroutine                                    Replay
> bool *done = req_replayed_list_get(reqid)   // NULL
>                                              co = req_completed_list_get(e.reqid); // NULL

There was no yield, so this context switch cannot happen. Same for the switch
back.

> req_completed_list_insert(reqid, qemu_coroutine_self());
> qemu_coroutine_yield();

This is the point at which a context switch happens. The only other point in
my code is the qemu_coroutine_enter() in the other function.

> bool done = false;
> req_replayed_list_insert(reqid, &done);
> while (!done) { // nobody will change done anymore
>     aio_poll();
> }

> > > You are right, but I found a kind of fundamental problem here.
> > > There are two possible approaches for replaying coroutine events:
> > >
> > > 1. Waiting with aio_poll in replay_run_block_event.
> > > In this case replay cannot work, because of a recursive mutex lock:
> > > aio_poll -> qemu_clock_get_ns -> ->
> > > replay_run_block_event -> aio_poll -> qemu_clock_get_ns -> ->
> > >
> > > I.e. we have recursive aio_poll function calls that lead to recursive
> > > replay calls and to a recursive mutex lock.
> >
> > That's what you get for replaying stuff in qemu_clock_get_ns() when it
> > should really be replayed in the hardware emulation.
>
> I'm not sure that this is possible for the RTC. The host clock should run even
> if the VM is stopped. If we emulate the host clock, its behavior has to be
> somehow synchronized with the host.

I think the RTC behaviour should be replayed, whereas the host clock should be
the real one. Or maybe it would actually make most sense to forbid clock=host
for record/replay, because in the replay you don't get the host clock anyway,
but just saved values.

> > This is a problem you would also have had with your original patch, as
> > there are some places in the block layer that use aio_poll() internally.
> > Using the blkreplay driver only made you find the bug earlier.
>
> Not exactly. Host clock replay will use the cached clock value if there is no
> such event in the log. The problem I observe here is with asynchronous block
> events that should be processed immediately as they are read from the log.

Doesn't your earlier patch do the same? I seem to remember that it calls
bdrv_drain_all(), which is essentially the same as calling aio_poll() in a
loop, except that its loop condition is slightly different.

> > But... can you detail how we get from aio_poll() to qemu_clock_get_ns()?
> > I don't see it directly calling that function, so it might just be a
> > callback to some guest device that is running now.
>
> aio_poll (win32) has the following line:
>
>     timeout = blocking && !have_select_revents
>         ? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
>
> The POSIX version also uses that function. aio_compute_timeout() requests the
> time from all existing clocks by calling timerlistgroup_deadline_ns().

I see, thanks.

Kevin
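
For reference, a rough sketch of the two-list scheme being discussed, pieced
together from the fragments quoted in this mail. The req_replayed_list_* and
req_completed_list_* helper names come from the thread; their signatures, the
choice of AioContext, and the surrounding code are assumptions for
illustration, not the actual patch:

#include "qemu/osdep.h"
#include "qemu/coroutine.h"
#include "block/aio.h"

/* Hypothetical helpers (names from the thread, signatures assumed). */
bool *req_replayed_list_get(uint32_t reqid);
void req_replayed_list_insert(uint32_t reqid, bool *done);
Coroutine *req_completed_list_get(uint32_t reqid);
void req_completed_list_insert(uint32_t reqid, Coroutine *co);

/* Coroutine side: runs after the guest request has completed. */
static void coroutine_fn blkreplay_wait_for_log_event(uint32_t reqid)
{
    bool *done = req_replayed_list_get(reqid);
    if (done) {
        /* replay_run_block_event() is already waiting in aio_poll();
         * flipping the flag lets its loop terminate. */
        *done = true;
    } else {
        /* Park this coroutine until the log event arrives and
         * replay_run_block_event() re-enters it. */
        req_completed_list_insert(reqid, qemu_coroutine_self());
        qemu_coroutine_yield();
    }
}

/* Replay side: runs when the block event is read from the log. */
void replay_run_block_event(uint32_t reqid)
{
    Coroutine *co = req_completed_list_get(reqid);
    if (co) {
        /* The request already finished and yielded; resume it now.
         * (Two-argument form as in the QEMU coroutine API of that time.) */
        qemu_coroutine_enter(co, NULL);
    } else {
        /* The request is still in flight: drive the event loop until the
         * coroutine flips 'done', instead of busy-waiting with usleep(). */
        bool done = false;
        req_replayed_list_insert(reqid, &done);
        while (!done) {
            aio_poll(qemu_get_aio_context(), true);
        }
    }
}

The point of waiting in aio_poll() rather than in a usleep() loop is exactly
the one argued above: aio_poll() runs the completion handlers that eventually
let the request's coroutine run, so the loop condition can actually change.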