From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:39729) by lists.gnu.org
	with esmtp (Exim 4.71) (envelope-from ) id 1ZaMnG-0007uz-Iv
	for qemu-devel@nongnu.org; Fri, 11 Sep 2015 07:47:03 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1ZaMnF-0008NF-HJ
	for qemu-devel@nongnu.org; Fri, 11 Sep 2015 07:47:02 -0400
Date: Fri, 11 Sep 2015 19:46:53 +0800
From: Fam Zheng
Message-ID: <20150911114653.GA11386@ad.nay.redhat.com>
References: <1438144934-23619-1-git-send-email-famz@redhat.com>
 <20150911103926.GA5164@noname.redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20150911103926.GA5164@noname.redhat.com>
Subject: Re: [Qemu-devel] [PATCH v2 00/11] aio: Introduce handler type to fix nested aio_poll for dataplane
To: Kevin Wolf
Cc: pbonzini@redhat.com, qemu-block@nongnu.org, qemu-devel@nongnu.org,
 stefanha@redhat.com

On Fri, 09/11 12:39, Kevin Wolf wrote:
> Am 29.07.2015 um 06:42 hat Fam Zheng geschrieben:
> > v2: Switch to disable/enable model. [Paolo]
> >
> > Most existing nested aio_poll()'s in the block layer are inconsiderate
> > of dispatching potential new r/w requests from ioeventfds and NBD
> > exports, which can cause responsiveness issues (e.g. bdrv_drain_all()
> > will not return while new requests keep coming), or even wrong
> > semantics (e.g. qmp_transaction cannot enforce atomicity because of
> > the aio_poll() in bdrv_drain_all()).
> >
> > Previous attempts to address this issue include a new op blocker [1],
> > bdrv_lock [2] and nested AioContexts (patches not posted to
> > qemu-devel).
> >
> > This approach is based on the idea proposed by Paolo Bonzini. The
> > original idea is to "introduce aio_context_disable_client /
> > aio_context_enable_client to filter AioContext handlers according to
> > the 'client', e.g.
> > AIO_CLIENT_DATAPLANE (ioeventfd), AIO_CLIENT_PROTOCOL,
> > AIO_CLIENT_NBD_SERVER, AIO_CLIENT_CONTEXT, ... Extend
> > aio_set_{event_notifier,fd}_handler to pass a client (type)."
> >
> > After this series, the block layer's aio_poll() will only process the
> > "protocol" fds that are used in block I/O, plus the ctx->notifier
> > needed for aio_notify(); all other aio_poll()'s remain unchanged.
> >
> > The biggest advantage over approaches [1] and [2] is that no change is
> > needed in the virtio-{blk,scsi}-dataplane code; also, this doesn't
> > depend on converting QMP to coroutines.
>
> It seems that I haven't replied on the mailing list yet, even though I
> think I already said this in person at KVM Forum: this series fixes only
> a special case of the real problem, which is that bdrv_drain/all at a
> single point doesn't make a lot of sense, but needs to be replaced by a
> whole section with exclusive access, like a bdrv_drained_begin/end pair.
>
> To be clear: anything that works with types of users instead of
> individual users is bound to fall short of being a complete solution. I
> don't prefer partial solutions when we know there is a bigger problem.
>
> This series addresses your immediate need of protecting against new
> data plane requests, which it arguably achieves. The second case I
> always have in mind is Berto's case, where he has multiple streaming
> block jobs in the same backing file chain [1].
>
> This involves a bdrv_reopen() of the target BDS to make it writable,
> and bdrv_reopen() uses bdrv_drain_all() so drivers don't have to cope
> with running requests while reopening themselves. It can however
> involve nested event loops for synchronous operations like
> bdrv_flush(), and if those can process completions of block jobs, which
> can respond by doing anything to the respective node, things can go
> wrong.

Just to get a better idea of bdrv_drained_begin/end, could you explain
how to use the pair to fix the above problem?
>
> You don't solve this by adding client types (the problematic request
> would then be PROTOCOL in your proposal, and you can never exclude
> that); you really need to have bdrv_drained_begin/end pairs, where only
> requests issued in between are processed and everything else waits.

What do you mean by "only requests issued in between are processed"?
Where are the requests from?

Fam