qemu-devel.nongnu.org archive mirror
From: Max Reitz <mreitz@redhat.com>
To: famz@redhat.com
Cc: kwolf@redhat.com, qemu-devel@nongnu.org, stefanha@redhat.com
Subject: Re: [Qemu-devel] [PATCH] qemu-iotests: fix 030 for faster machines
Date: Fri, 18 Oct 2013 20:17:13 +0200	[thread overview]
Message-ID: <52617B29.9040104@redhat.com> (raw)
In-Reply-To: <20131017062852.GA6258@T430s.nay.redhat.com>

On 2013-10-17 08:28, Fam Zheng wrote:
> On Wed, 10/16 20:45, Max Reitz wrote:
>> On 2013-10-15 04:41, Fam Zheng wrote:
>>> If the block job completes too fast, the test can fail. Change the
>>> numbers so the qmp events are more stably captured by the script.
>>>
>>> A sleep is removed for the same reason.
>>>
>>> Signed-off-by: Fam Zheng <famz@redhat.com>
>>> ---
>>>   tests/qemu-iotests/030 | 11 +++++------
>>>   1 file changed, 5 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
>>> index ae56f3b..188b182 100755
>>> --- a/tests/qemu-iotests/030
>>> +++ b/tests/qemu-iotests/030
>>> @@ -403,14 +403,13 @@ class TestStreamStop(iotests.QMPTestCase):
>>>           result = self.vm.qmp('block-stream', device='drive0')
>>>           self.assert_qmp(result, 'return', {})
>>> -        time.sleep(0.1)
>> Hm, I'm not sure whether removing the sleep actually removes the
>> underlying race condition… It should work in most cases and the
>> foreseeable future, though.
>>
>>>           events = self.vm.get_qmp_events(wait=False)
>>>           self.assertEqual(events, [], 'unexpected QMP event: %s' % events)
>>>           self.cancel_and_wait()
>>>   class TestSetSpeed(iotests.QMPTestCase):
>>> -    image_len = 80 * 1024 * 1024 # MB
>>> +    image_len = 512 * 1024 * 1024 # MB
>>>       def setUp(self):
>>>           qemu_img('create', backing_img, str(TestSetSpeed.image_len))
>>> @@ -457,23 +456,23 @@ class TestSetSpeed(iotests.QMPTestCase):
>>>           self.assert_qmp(result, 'return[0]/device', 'drive0')
>>>           self.assert_qmp(result, 'return[0]/speed', 0)
>>> -        result = self.vm.qmp('block-job-set-speed', device='drive0', speed=8 * 1024 * 1024)
>>> +        result = self.vm.qmp('block-job-set-speed', device='drive0', speed=8 * 1024)
>> So the limit was already 8 MB/s? Doesn't this mean that the job
>> should have taken 10 seconds anyway? Sounds to me like the block job
>> speed is basically disregarded anyway.
> No, see below...
>
>> If I re-add the sleep you removed in this patch, this test fails
>> again for me. This further suggests block-job-set-speed to be kind
>> of a no-op and the changes concerning the image length and the block
>> job speed not really contributing to fixing the test.
>>
>> So I think removing the sleep is all that would have to be done
>> right now. OTOH, this is not really a permanent fix, either (the
>> fundamental race condition remains). Furthermore, I guess there is
>> some reason for having a sleep there - else it would not exist in
>> the first place (and it apparently already caused problems some time
>> ago which were "fixed" by replacing the previous "sleep(1)" by
>> "sleep(0.1)").
>>
>> All in all, if someone can assure me that the sleep in question is
>> indeed unnecessary, I think removing it is all that's needed.
>>
>> Max
>>
> Both failure cases are just that setting the speed or checking the status comes
> too late: the streaming finishes, or comes close to finishing, in negligible time
> once the job is started. In other words, dropping the speed change and only
> increasing image_len and removing the sleep would fix it for me too.

Ah, sorry, I missed that those failures are two separate test cases and 
that both changes are basically independent of each other. My fault.
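For reference, a quick back-of-the-envelope sketch of why the new numbers keep the job alive for the whole test (lengths in bytes, speeds in bytes per second, which is what block-job-set-speed takes):

```python
# Duration math for the two parameter sets in the patch.
MiB = 1024 * 1024

old_len, old_speed = 80 * MiB, 8 * MiB    # pre-patch: 80 MiB at 8 MiB/s
new_len, new_speed = 512 * MiB, 8 * 1024  # post-patch: 512 MiB at 8 KiB/s

print(old_len // old_speed)   # 10 seconds: short enough to race the test
print(new_len // new_speed)   # 65536 seconds: the job cannot finish early
```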

Hm, well, but I'm still not happy with removing the sleep. I've tried to 
come up with a different solution myself and didn't find one either… But 
the fact remains that there are three things that can happen:

First, the block job might finish before the cancelling QMP command is 
sent. The test script and qemu run independently of each other, so this 
can still happen (although I guess the block device would have to be 
really fast).

Second, I'd still like an explanation of why the sleep is indeed 
unnecessary. I guess its purpose is to ensure the block job is actually 
running before it is cancelled – removing the sleep might defeat that 
purpose, though I don't know how bad that is.

Third, since qemu is indeed running independently of the test script, 
the block job is in fact running and has not yet finished by the time it 
gets cancelled. This would be the desired result.

I admit that the first outcome is all but impossible in any realistic 
scenario. However, the second one is what makes me uncomfortable.
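The first and third outcomes can be sketched with a toy model (not qemu code, just an illustration) of a job that runs independently of the test script and finishes on its own unless it is cancelled first:

```python
import threading
import time

class FakeBlockJob:
    """Stand-in for a block job running concurrently with the test."""
    def __init__(self, duration):
        self.finished = threading.Event()
        self.cancelled = threading.Event()
        self._thread = threading.Thread(target=self._run, args=(duration,))
        self._thread.start()

    def _run(self, duration):
        # Event.wait() returns False on timeout, i.e. when nobody
        # cancelled the job before it ran to completion.
        if not self.cancelled.wait(timeout=duration):
            self.finished.set()

    def cancel(self):
        self.cancelled.set()
        self._thread.join()

# Outcome one: a fast job completes before the test gets around to
# cancelling it (a fixed sleep only widens this window).
fast = FakeBlockJob(duration=0.01)
time.sleep(0.1)   # the sleep(0.1) under discussion
fast.cancel()
assert fast.finished.is_set()

# Outcome three (the desired one): a throttled job -- large image,
# low speed limit -- is still running when the cancel arrives.
slow = FakeBlockJob(duration=60)
slow.cancel()
assert not slow.finished.is_set()
```

The point of the toy model: no sleep value can deterministically pick outcome three; only making the job slow relative to the test (or synchronizing on an explicit event) does.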

Max


Thread overview: 6+ messages
2013-10-15  2:41 [Qemu-devel] [PATCH] qemu-iotests: fix 030 for faster machines Fam Zheng
2013-10-16 18:45 ` Max Reitz
2013-10-17  6:28   ` Fam Zheng
2013-10-18 18:17     ` Max Reitz [this message]
2013-10-17 12:38 ` Stefan Hajnoczi
2013-10-30 11:45   ` Fam Zheng
