Date: Sat, 3 Dec 2016 04:12:30 -0500 (EST)
From: Paolo Bonzini
Message-ID: <350060616.1292696.1480756350395.JavaMail.zimbra@redhat.com>
In-Reply-To: <6f169bc2-257e-a627-8344-e5c37743ac69@virtuozzo.com>
References: <1477565348-5458-1-git-send-email-pbonzini@redhat.com> <1477565348-5458-3-git-send-email-pbonzini@redhat.com> <6f169bc2-257e-a627-8344-e5c37743ac69@virtuozzo.com>
Subject: Re: [Qemu-devel] [PATCH 02/20] blockjob: introduce .drain callback for jobs
To: Vladimir Sementsov-Ogievskiy
Cc: qemu-devel@nongnu.org, kwolf@redhat.com, famz@redhat.com, stefanha@redhat.com

----- Original Message -----
> From: "Vladimir Sementsov-Ogievskiy"
> To: "Paolo Bonzini", qemu-devel@nongnu.org
> Cc: kwolf@redhat.com, famz@redhat.com, stefanha@redhat.com
> Sent: Friday, December 2, 2016 3:01:30 PM
> Subject: Re: [Qemu-devel] [PATCH 02/20] blockjob: introduce .drain callback for jobs
>
> 27.10.2016 13:48, Paolo Bonzini wrote:
> > This is required to decouple block jobs from running in an
> > AioContext.  With multiqueue block devices, a BlockDriverState
> > does not really belong to a single AioContext.
> >
> > The solution is to first wait until all I/O operations are
> > complete; then loop in the main thread for the block job to
> > complete entirely.
>
> Looks like I have a problem with this.  block_job_drain() enters the
> job only if job.busy is false.  But what if the job yielded with busy
> still true?
>
> My case is the following: in the job I call co_aio_sleep_ns() for some
> time without setting job.busy to false, and it looks like the timer
> doesn't fire while we are in the "while () { block_job_drain(); }"
> loop.  If I just set job.busy = false before co_aio_sleep_ns() and
> job.busy = true after it, everything starts to work.
>
> I don't want to set job.busy to false, because the job actually is
> working - several additional coroutines do their work; only the main
> one (job.co) does nothing.  I can remove the timer and make the other
> coroutines wake up the main one when needed, and anyway that looks
> like the better way...
>
> But the question is: is it OK that we can't use a sleep timer in the
> job without setting busy = false?  Is it right that only I/O can wake
> up the block job coroutine if it yielded without setting busy = false?

That's more or less correct.  See for example mirror.c.  Whenever it
yields with busy = false, mirror_write_complete will wake the coroutine.

Note that it's also okay to use co_aio_sleep_ns() if you _also_ wake the
coroutine on I/O.  Reentering a coroutine automatically interrupts the
sleep.
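To make the pattern concrete, here is a rough, untested sketch.
ExampleJob, example_wait and example_io_complete are invented names, and
the helper signatures are from memory, so treat them as approximate:

    #include "qemu/osdep.h"
    #include "block/blockjob.h"
    #include "sysemu/block-backend.h"
    #include "qemu/coroutine.h"
    #include "qemu/timer.h"

    typedef struct ExampleJob {
        BlockJob common;
        int in_flight;          /* extra coroutines still doing I/O */
    } ExampleJob;

    /* Called from the main job coroutine (job->co).  */
    static void coroutine_fn example_wait(ExampleJob *s, int64_t delay_ns)
    {
        /* Yield with busy = false, as block_job_sleep_ns() does, so
         * that whoever re-enters us (block_job_enter(), an I/O
         * completion) interrupts the sleep.  */
        s->common.busy = false;
        co_aio_sleep_ns(blk_get_aio_context(s->common.blk),
                        QEMU_CLOCK_REALTIME, delay_ns);
        s->common.busy = true;
    }

    /* Completion callback of one of the additional coroutines.  */
    static void example_io_complete(void *opaque, int ret)
    {
        ExampleJob *s = opaque;

        s->in_flight--;
        /* Wake the main coroutine; if it is sleeping in example_wait(),
         * re-entering it cuts the sleep short.  */
        if (!s->common.busy) {
            qemu_coroutine_enter(s->common.co);
        }
    }

The second half is essentially what mirror_write_complete ends up doing
for mirror.c.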
Paolo

> > +        while (!job->deferred_to_main_loop && !job->completed) {
> > +            block_job_drain(job);
> > +        }
> >          while (!job->completed) {
> > -            aio_poll(block_job_get_aio_context(job), true);
> > +            aio_poll(qemu_get_aio_context(), true);
> >          }
> >          ret = (job->cancelled && job->ret == 0) ? -ECANCELED : job->ret;
> >          block_job_unref(job);
>
> --
> Best regards,
> Vladimir