From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([208.118.235.92]:43615)
 by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
 id 1T9WQy-0006Eu-Ex for qemu-devel@nongnu.org; Thu, 06 Sep 2012 03:23:29 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
 (envelope-from ) id 1T9WQu-0001Uf-ID for qemu-devel@nongnu.org;
 Thu, 06 Sep 2012 03:23:28 -0400
Received: from mail-pb0-f45.google.com ([209.85.160.45]:41673)
 by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
 id 1T9WQu-0001UX-CK for qemu-devel@nongnu.org; Thu, 06 Sep 2012 03:23:24 -0400
Received: by pbbjt11 with SMTP id jt11so2181973pbb.4
 for ; Thu, 06 Sep 2012 00:23:23 -0700 (PDT)
Sender: Paolo Bonzini
Message-ID: <50484F63.5050406@redhat.com>
Date: Thu, 06 Sep 2012 09:23:15 +0200
From: Paolo Bonzini
MIME-Version: 1.0
References: <20120809130010.GA7960@in.ibm.com> <20120809130216.GC7960@in.ibm.com>
 <20120905074106.GA28080@in.ibm.com> <20120905095431.GB28080@in.ibm.com>
In-Reply-To: <20120905095431.GB28080@in.ibm.com>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [PATCH v6 2/2] block: Support GlusterFS as a QEMU
 block backend
List-Id:
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: bharata@linux.vnet.ibm.com
Cc: Kevin Wolf , Anthony Liguori , Anand Avati , Stefan Hajnoczi ,
 Vijay Bellur , Amar Tumballi , qemu-devel@nongnu.org, Blue Swirl

On 05/09/2012 11:57, Bharata B Rao wrote:
>> > What could be the issue here? In general, how do I ensure that my
>> > aio calls get completed correctly in such scenarios where bdrv_read etc.
>> > are called from coroutine context rather than from main thread context?
>
> One way to handle this is not to do completion from the gluster thread but
> instead schedule a BH that does the completion.
> In fact I had this approach
> in the earlier versions, but resorted to directly calling completion from
> the gluster thread as I didn't see the value of using a BH for completion.
> But I guess it's indeed needed to support such scenarios (qcow2 image
> creation on a gluster backend).

I think the problem is that we're calling bdrv_drain_all() from coroutine
context. This is problematic because then the current coroutine won't
yield, and won't give other coroutines an occasion to run.

This could be fixed by checking whether we're in coroutine context in
bdrv_drain_all(). If so, instead of draining the queues directly, schedule
a bottom half that does bdrv_drain_all() followed by
qemu_coroutine_enter(), and yield.

If it works, I think this change would be preferable to using a "magic" BH
in every driver.

Paolo
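Something along these lines is what I have in mind. This is an untested,
pseudocode-level sketch, not a patch: the helper name bdrv_drain_all_bh_cb
and the DrainAllData struct are made up for illustration, and the exact
signatures of the coroutine/BH functions are from memory, so treat the
details as approximate:

```c
typedef struct {
    Coroutine *co;
    QEMUBH *bh;
} DrainAllData;

static void bdrv_drain_all_bh_cb(void *opaque)
{
    DrainAllData *data = opaque;

    qemu_bh_delete(data->bh);
    bdrv_drain_all();                     /* now running outside coroutine context */
    qemu_coroutine_enter(data->co, NULL); /* resume the coroutine that yielded */
}

void bdrv_drain_all(void)
{
    if (qemu_in_coroutine()) {
        DrainAllData data = { .co = qemu_coroutine_self() };

        data.bh = qemu_bh_new(bdrv_drain_all_bh_cb, &data);
        qemu_bh_schedule(data.bh);
        /* Yield so other coroutines (and the gluster completions) can
         * make progress; the BH re-enters us when draining is done. */
        qemu_coroutine_yield();
        return;
    }

    /* ... existing non-coroutine drain loop ... */
}
```

The point is that the "defer to a BH and yield" dance happens once, in the
block layer, instead of every driver having to route its completions
through a BH just to survive a drain from coroutine context.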