From: Fam Zheng
Reply-To: famz@redhat.com
To: Max Reitz
Cc: kwolf@redhat.com, qemu-devel@nongnu.org, stefanha@redhat.com
Date: Thu, 17 Oct 2013 14:28:52 +0800
Subject: Re: [Qemu-devel] [PATCH] qemu-iotests: fix 030 for faster machines
Message-ID: <20131017062852.GA6258@T430s.nay.redhat.com>
In-Reply-To: <525EDEB9.5030603@redhat.com>

On Wed, 10/16 20:45, Max Reitz wrote:
> On 2013-10-15 04:41, Fam Zheng wrote:
> > If the block job completes too fast, the test can fail. Change the
> > numbers so the qmp events are more stably captured by the script.
> >
> > A sleep is removed for the same reason.
> >
> > Signed-off-by: Fam Zheng
> > ---
> >  tests/qemu-iotests/030 | 11 +++++------
> >  1 file changed, 5 insertions(+), 6 deletions(-)
> >
> > diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
> > index ae56f3b..188b182 100755
> > --- a/tests/qemu-iotests/030
> > +++ b/tests/qemu-iotests/030
> > @@ -403,14 +403,13 @@ class TestStreamStop(iotests.QMPTestCase):
> >          result = self.vm.qmp('block-stream', device='drive0')
> >          self.assert_qmp(result, 'return', {})
> > -        time.sleep(0.1)
>
> Hm, I'm not sure whether removing the sleep actually removes the
> underlying race condition… It should work in most cases and the
> foreseeable future, though.
>
> >          events = self.vm.get_qmp_events(wait=False)
> >          self.assertEqual(events, [], 'unexpected QMP event: %s' % events)
> >          self.cancel_and_wait()
> >
> >  class TestSetSpeed(iotests.QMPTestCase):
> > -    image_len = 80 * 1024 * 1024 # MB
> > +    image_len = 512 * 1024 * 1024 # MB
> >
> >      def setUp(self):
> >          qemu_img('create', backing_img, str(TestSetSpeed.image_len))
> >
> > @@ -457,23 +456,23 @@ class TestSetSpeed(iotests.QMPTestCase):
> >          self.assert_qmp(result, 'return[0]/device', 'drive0')
> >          self.assert_qmp(result, 'return[0]/speed', 0)
> > -        result = self.vm.qmp('block-job-set-speed', device='drive0', speed=8 * 1024 * 1024)
> > +        result = self.vm.qmp('block-job-set-speed', device='drive0', speed=8 * 1024)
>
> So the limit was already 8 MB/s? Doesn't this mean that the job
> should have taken 10 seconds anyway? Sounds to me like the block job
> speed is basically disregarded anyway.

No, see below...

> If I re-add the sleep you removed in this patch, this test fails
> again for me. This further suggests block-job-set-speed to be kind
> of a no-op, and the changes concerning the image length and the block
> job speed not really contributing to fixing the test.
>
> So I think removing the sleep is all that would have to be done.
> OTOH, this is not really a permanent fix, either (the
> fundamental race condition remains). Furthermore, I guess there is
> some reason for having a sleep there - else it would not exist in
> the first place (and it apparently already caused problems some time
> ago, which were "fixed" by replacing the previous "sleep(1)" with
> "sleep(0.1)").
>
> All in all, if someone can assure me that the sleep in question is
> unnecessary, I think removing it is all that's needed.
>
> Max

Both failure cases are just that setting the speed or checking the status
comes too late: the streaming finishes, or comes close to finishing, in
negligible time once the job is started. In other words, dropping the speed
change and only increasing image_len and removing the sleep would fix it for
me too.

Fam
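For what it's worth, the race being discussed can be reproduced outside QEMU with a plain Python sketch. `FakeBlockJob` and its timings below are hypothetical stand-ins, not part of the real iotests harness: the point is only that a fixed sleep gives a fast job time to complete before the "no events yet" check, while checking immediately (and making the job long enough) keeps that window closed.

```python
# Minimal sketch of the race in test 030 (hypothetical names; not the
# real iotests harness). A background "job" emits a completion event
# after `duration` seconds, like a block-stream job finishing.
import threading
import time

class FakeBlockJob:
    def __init__(self, duration):
        self.events = []            # events the test would poll via QMP
        self._thread = threading.Thread(
            target=self._run, args=(duration,), daemon=True)

    def start(self):
        self._thread.start()

    def _run(self, duration):
        time.sleep(duration)        # simulated streaming work
        self.events.append('BLOCK_JOB_COMPLETED')

# With the old fixed sleep, a fast job completes before the check:
fast = FakeBlockJob(duration=0.01)
fast.start()
time.sleep(0.1)                     # the sleep the patch removes
print(fast.events)                  # completion event already arrived

# Checking immediately, with a job that runs long enough, avoids it:
slow = FakeBlockJob(duration=5.0)
slow.start()
events = list(slow.events)          # no sleep before the check
print(events)                       # [] -- job still running
```

This is why removing the sleep and enlarging the image both narrow the window: the check happens before the job can possibly finish, rather than after an arbitrary delay.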