Message-ID: <4FD747BF.3020809@redhat.com>
Date: Tue, 12 Jun 2012 15:44:31 +0200
From: Kevin Wolf
References: <20120513160331.5b774c93.yizhouzhou@ict.ac.cn> <4FB0F89F.6080306@redhat.com> <4FD74513.2000500@suse.de>
In-Reply-To: <4FD74513.2000500@suse.de>
Subject: Re: [Qemu-devel] fixing qemu-0.1X endless loop in qcow2_alloc_cluster_offset
To: Andreas Färber
Cc: Bruce Rogers, Michael Tokarev, Zhouyi Zhou, qemu-devel@nongnu.org

On 12.06.2012 15:33, Andreas Färber wrote:
> On 14.05.2012 14:20, Kevin Wolf wrote:
>> On 13.05.2012 10:03, Zhouyi Zhou wrote:
>>> Hi all,
>>>
>>> Sometimes qemu/kvm-0.1x hangs in an endless loop in
>>> qcow2_alloc_cluster_offset. After some investigation, I found the
>>> following in posix_aio_process_queue(void *opaque):
>>>
>>> 440    ret = qemu_paio_error(acb);
>>> 441    if (ret == ECANCELED) {
>>> 442        /* remove the request */
>>> 443        *pacb = acb->next;
>>> 444        qemu_aio_release(acb);
>>> 445        result = 1;
>>> 446    } else if (ret != EINPROGRESS) {
>>>
>>> In line 444, acb is released, but acb->common.opaque is not.
>>> It will be released by the guest OS via ide_dma_cancel, which in
>>> turn calls qcow_aio_cancel; qcow_aio_cancel does not check whether
>>> its argument is still on the in-flight list.
>>>
>>> The fix is as follows (against Debian 6's qemu-kvm-0.12.5):
>>> #######################################
>>> --- block/qcow2.h~	2010-07-27 08:43:53.000000000 +0800
>>> +++ block/qcow2.h	2012-05-13 15:51:39.000000000 +0800
>>> @@ -143,6 +143,7 @@
>>>      QLIST_HEAD(QCowAioDependencies, QCowAIOCB) dependent_requests;
>>>
>>>      QLIST_ENTRY(QCowL2Meta) next_in_flight;
>>> +    int inflight;
>>>  } QCowL2Meta;
>>>
>>> --- block/qcow2.c~	2012-05-13 15:57:09.000000000 +0800
>>> +++ block/qcow2.c	2012-05-13 15:57:24.000000000 +0800
>>> @@ -349,6 +349,10 @@
>>>      QCowAIOCB *acb = (QCowAIOCB *)blockacb;
>>>      if (acb->hd_aiocb)
>>>          bdrv_aio_cancel(acb->hd_aiocb);
>>> +    if (acb->l2meta.inflight) {
>>> +        QLIST_REMOVE(&acb->l2meta, next_in_flight);
>>> +        acb->l2meta.inflight = 0;
>>> +    }
>>>      qemu_aio_release(acb);
>>>  }
>>>
>>> @@ -506,6 +510,7 @@
>>>      acb->n = 0;
>>>      acb->cluster_offset = 0;
>>>      acb->l2meta.nb_clusters = 0;
>>> +    acb->l2meta.inflight = 0;
>>>      QLIST_INIT(&acb->l2meta.dependent_requests);
>>>      return acb;
>>>  }
>>>
>>> @@ -534,6 +539,7 @@
>>>      /* Take the request off the list of running requests */
>>>      if (m->nb_clusters != 0) {
>>>          QLIST_REMOVE(m, next_in_flight);
>>> +        m->inflight = 0;
>>>      }
>>>
>>>      /*
>>>
>>> @@ -632,6 +638,7 @@
>>>  fail:
>>>      if (acb->l2meta.nb_clusters != 0) {
>>>          QLIST_REMOVE(&acb->l2meta, next_in_flight);
>>> +        acb->l2meta.inflight = 0;
>>>      }
>>>  done:
>>>      if (acb->qiov->niov > 1)
>>>
>>> --- block/qcow2-cluster.c~	2010-07-27 08:43:53.000000000 +0800
>>> +++ block/qcow2-cluster.c	2012-05-13 15:53:53.000000000 +0800
>>> @@ -827,6 +827,7 @@
>>>      m->offset = offset;
>>>      m->n_start = n_start;
>>>      m->nb_clusters = nb_clusters;
>>> +    m->inflight = 1;
>>>
>>>  out:
>>>      m->nb_available = MIN(nb_clusters << (s->cluster_bits - 9), n_end);
>>>
>>> Thanks for investigating.
>>> Zhouyi
>>
>> The patch looks reasonable to me. Note, however, that while it fixes
>> the hang, it still leaks clusters. I'm not sure whether anyone is
>> interested in picking these fixes up for old stable releases.
>> Andreas, I think you were going to take 0.15? The first version that
>> doesn't have the problem is 1.0.
>
> Kevin, the policy as I understood it is to cherry-pick patches from
> qemu.git into qemu-stable-x.y.git. So I don't think it would be right
> for me to apply this patch to stable-0.15. I don't spot a particular
> qcow2 fix among our 0.15 backports that I have now pushed. Do you have
> a pointer to the one(s) that would fix this issue so that I can
> recheck?

It's "fixed" as a side effect of the block layer conversion to
coroutines. Not exactly the kind of patches you'd want to cherry-pick
for stable-0.15.

The better fix for 0.15 could be to backport the new behaviour of
coroutine-based requests with bdrv_aio_cancel:

static void bdrv_aio_co_cancel_em(BlockDriverAIOCB *blockacb)
{
    qemu_aio_flush();
}

Using that as the implementation for qcow2_aio_cancel should be safe
and would fix this problem.

Kevin