Date: Thu, 11 Apr 2013 11:19:22 +0200
From: Stefan Hajnoczi
Message-ID: <20130411091922.GD8904@stefanha-thinkpad.redhat.com>
Subject: Re: [Qemu-devel] [RFC PATCH v2 0/4] port network layer onto glib
To: liu ping fan
Cc: pbonzini@redhat.com, Anthony Liguori, qemu-devel@nongnu.org, mdroth

On Tue, Apr 09, 2013 at 01:10:29PM +0800, liu ping fan wrote:
> On Mon, Apr 8, 2013 at 7:46 PM, Stefan Hajnoczi wrote:
> > On Tue, Apr 02, 2013 at 05:49:57PM +0800, liu ping fan wrote:
> >> On Thu, Mar 28, 2013 at 9:40 PM, Stefan Hajnoczi wrote:
> >> > On Thu, Mar 28, 2013 at 09:42:47AM +0100, Paolo Bonzini wrote:
> >> >> On 28/03/2013 08:55, Liu Ping Fan wrote:
> >> >> > 3rd. The block layer's AioContext will block other AioContexts
> >> >> > on the same thread.
> >> >>
> >> >> I cannot understand this.
> >> >
> >> > The plan is for BlockDriverState to be bound to an AioContext.  That
> >> > means each thread is set up with one AioContext.  BlockDriverStates
> >> > that are used in that thread will first be bound to its AioContext.
> >> >
> >> > It's not very useful to have multiple AioContexts in the same thread.
> >> >
> >> But it can be the case that we detach and re-attach a different
> >> device (AioContext) to the same thread.  I think io_flush is designed
> >> for synchronization, but for NetClientState we need something else.
> >> So if we use AioContext, is it proper to extend a readable/writeable
> >> interface for qemu_aio_set_fd_handler()?
> >
> > Devices don't have AioContexts, threads do.  When you bind a device to
> > an AioContext, the AioContext already exists independent of the device.
> >
> Oh, yes.  So let me put it this way: we switch devices among different
> threads.  Then if a NetClientState happens to live on the same thread
> as a BlockDriverState, it will not be responsive until the
> BlockDriverState has finished its in-flight work.

It's partially true that devices sharing an event loop may be less
responsive.  That's why we have the option of a 1:1 device-to-thread
mapping.

But remember that QEMU code is (meant to be) designed for event loops.
Therefore, it must not block and should return to the event loop as
quickly as possible.

So a block and a net device in the same event loop shouldn't
inconvenience each other dramatically if the device-to-thread mappings
are reasonable given the host machine, workload, etc.
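
For concreteness, here is a minimal standalone glib sketch (not QEMU
code, every name in it is illustrative) of the handler style meant
above: the callback reads whatever is available on a non-blocking fd
and returns to the event loop right away, so other handlers registered
on the same loop are never starved.

/* Minimal standalone glib example -- not QEMU code, all names are
 * illustrative.  The handler reads whatever is currently available on
 * a non-blocking fd and returns to the event loop immediately, which
 * is the behaviour expected of device code that shares an event loop. */
#include <glib.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>

static gboolean on_readable(GIOChannel *chan, GIOCondition cond, gpointer data)
{
    char buf[4096];
    int fd = g_io_channel_unix_get_fd(chan);

    /* The fd is O_NONBLOCK, so read() never sleeps; it returns whatever
     * is there (or -1/EAGAIN if the event raced with another reader). */
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n > 0) {
        printf("got %zd bytes\n", n);
    }
    return TRUE;   /* keep the watch registered; FALSE would remove it */
}

int main(void)
{
    int fd = STDIN_FILENO;
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);

    GIOChannel *chan = g_io_channel_unix_new(fd);
    g_io_add_watch(chan, G_IO_IN, on_readable, NULL);

    /* Other handlers (net, block, timers, ...) registered on the same
     * main loop get to run between callbacks as long as nobody blocks. */
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    g_main_loop_run(loop);
    return 0;
}

The important part is that on_readable() never loops waiting for more
data; once it has consumed what is there, it returns and lets the main
loop service whichever handler is ready next.
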
> > Unfortunately I don't understand your question about io_flush and
> > readable/writeable qemu_aio_set_fd_handler().
> >
> As for readable/writable, I mean something like IOCanReadHandler.  If a
> NetClientState is unreadable, it simply does not poll for the G_IO_IN
> event, but it does not block.  io_flush, on the other hand, blocks for
> pending AIO operations.  These behaviors are different, so I suggest
> expanding readable/writeable for qemu_aio_set_fd_handler().

I see, thanks for explaining.  In another thread Kevin suggested a
solution:

Basically, io_flush() and qemu_aio_wait() should be removed.  Instead
we'll push the synchronous wait into the block layer, which is the only
user.  We can do that by introducing a .bdrv_drain() function which is
similar to io_flush().

Now bdrv_drain_all(), which uses qemu_aio_wait(), can change to calling
.bdrv_drain() and then executing an event loop iteration.

In other words, the event loop shouldn't know about io_flush().

I will try to send patches for this today.

Stefan
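
For illustration only, here is a rough, self-contained sketch of that
direction.  The structures are toy stand-ins and the names (the
.bdrv_drain callback, bdrv_drain_all(), the fake event-loop iteration)
follow the proposal above rather than any real QEMU source, so treat
this as a reading aid, not as the patches.

#include <stdbool.h>
#include <stdio.h>

typedef struct BlockDriverState BlockDriverState;

typedef struct BlockDriver {
    /* Hypothetical callback: wait until this device's in-flight
     * requests have completed -- the role the per-fd io_flush()
     * callback used to play for the event loop. */
    void (*bdrv_drain)(BlockDriverState *bs);
} BlockDriver;

struct BlockDriverState {
    BlockDriver *drv;
    BlockDriverState *next;   /* simplified global list of devices */
    int in_flight;            /* toy stand-in for pending AIO requests */
};

static BlockDriverState *all_states;   /* illustrative list head */

/* Toy stand-in for one event loop iteration: pretend one completion
 * fires per device that still has work pending. */
static bool run_one_event_loop_iteration(void)
{
    bool progress = false;

    for (BlockDriverState *bs = all_states; bs; bs = bs->next) {
        if (bs->in_flight > 0) {
            bs->in_flight--;
            progress = true;
        }
    }
    return progress;
}

/* A driver's drain implementation: spin the event loop until this
 * particular device has no requests left. */
static void toy_drain(BlockDriverState *bs)
{
    while (bs->in_flight > 0) {
        run_one_event_loop_iteration();
    }
}

/* The synchronous wait now lives entirely in the block layer; the
 * event loop itself no longer needs to know about io_flush(). */
static void bdrv_drain_all(void)
{
    for (BlockDriverState *bs = all_states; bs; bs = bs->next) {
        if (bs->drv && bs->drv->bdrv_drain) {
            bs->drv->bdrv_drain(bs);
        }
    }
    /* Let the loop dispatch anything the drain callbacks queued. */
    while (run_one_event_loop_iteration()) {
        /* keep iterating until no more progress */
    }
}

int main(void)
{
    BlockDriver drv = { .bdrv_drain = toy_drain };
    BlockDriverState disk = { .drv = &drv, .next = NULL, .in_flight = 3 };

    all_states = &disk;
    bdrv_drain_all();
    printf("in-flight requests after drain: %d\n", disk.in_flight);
    return 0;
}

The point the sketch tries to make is just the split of
responsibilities: the block layer owns the synchronous waiting, and the
event loop only dispatches events.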