From: QingFeng Hao <haoqf@linux.vnet.ibm.com>
To: Stefan Hajnoczi <stefanha@gmail.com>, Kevin Wolf <kwolf@redhat.com>
Cc: Fam Zheng <famz@redhat.com>, qemu block <qemu-block@nongnu.org>,
Cornelia Huck <cohuck@redhat.com>,
qemu-devel <qemu-devel@nongnu.org>,
Christian Borntraeger <borntraeger@de.ibm.com>,
Stefan Hajnoczi <stefanha@redhat.com>
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH v1 1/1] iotests: fix test case 185
Date: Wed, 21 Mar 2018 11:17:33 +0800
Message-ID: <c4b58739-bfd3-6d2f-f464-8ffd66b75ff1@linux.vnet.ibm.com>
In-Reply-To: <2ad76fc0-6da9-9911-2dcd-335f7715fdef@linux.vnet.ibm.com>
On 2018/3/21 11:12, QingFeng Hao wrote:
>
>
> On 2018/3/20 22:31, Stefan Hajnoczi wrote:
>> On Tue, Mar 20, 2018 at 11:11:01AM +0100, Kevin Wolf wrote:
>>> On 19.03.2018 at 18:53, Christian Borntraeger wrote:
>>>>
>>>>
>>>> On 03/19/2018 05:10 PM, Stefan Hajnoczi wrote:
>>>>> On Mon, Mar 19, 2018 at 9:35 AM, QingFeng Hao
>>>>> <haoqf@linux.vnet.ibm.com> wrote:
>>>>>> Test case 185 has failed since commit 4486e89c219 ("vl: introduce
>>>>>> vm_shutdown()"). It's because the newly introduced function
>>>>>> vm_shutdown calls bdrv_drain_all, which is also called later by
>>>>>> bdrv_close_all. bdrv_drain_all resumes the jobs, which doubles the
>>>>>> speed, and so the offset is doubled. Some jobs' statuses are
>>>>>> changed as well.
>>>>>>
>>>>>> Thus, let's not call bdrv_drain_all in vm_shutdown.
>>>>>>
>>>>>> Signed-off-by: QingFeng Hao <haoqf@linux.vnet.ibm.com>
>>>>>> ---
>>>>>> cpus.c | 5 +++--
>>>>>> 1 file changed, 3 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/cpus.c b/cpus.c
>>>>>> index 2e6701795b..ae2962508c 100644
>>>>>> --- a/cpus.c
>>>>>> +++ b/cpus.c
>>>>>> @@ -1006,8 +1006,9 @@ static int do_vm_stop(RunState state, bool send_stop)
>>>>>>              qapi_event_send_stop(&error_abort);
>>>>>>          }
>>>>>>      }
>>>>>> -
>>>>>> -    bdrv_drain_all();
>>>>>> +    if (send_stop) {
>>>>>> +        bdrv_drain_all();
>>>>>> +    }
>>>>>
>>>>> Thanks for looking into this bug!
>>>>>
>>>>> This if statement breaks the contract of the function when send_stop
>>>>> is false. Drain ensures that pending I/O completes and that device
>>>>> emulation finishes before this function returns. This is important
>>>>> for live migration, where there must be no more guest-related activity
>>>>> after vm_stop().
>>>>>
>>>>> This patch relies on the caller invoking bdrv_close_all() immediately
>>>>> after do_vm_stop(), which is not documented and could lead to more
>>>>> bugs in the future.
>>>>>
>>>>> I suggest a different solution:
>>>>>
>>>>> Test 185 relies on internal QEMU behavior which can change from time
>>>>> to time. Buffer sizes and block job iteration counts are
>>>>> implementation details that are not part of qapi-schema.json or other
>>>>> documented behavior.
>>>>>
>>>>> In fact, the existing test case was already broken in IOThread mode
>>>>> since iothread_stop_all() -> bdrv_set_aio_context() also includes a
>>>>> bdrv_drain() with the side-effect of an extra blockjob iteration.
>>>>>
>>>>> After 4486e89c219 ("vl: introduce vm_shutdown()") both IOThread and
>>>>> non-IOThread mode perform the drain. This has caused the test
>>>>> failure.
>>>>>
>>>>> Instead of modifying QEMU to keep the same arbitrary internal behavior
>>>>> as before, please send a patch that updates tests/qemu-iotests/185.out
>>>>> with the latest output.
>>>>
>>>> Wouldn't it be better if the test actually masked out the values that
>>>> are not really part of the "agreed upon" behaviour? Wouldn't
>>>> block_job_offset from common.filter be what we want?
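For reference, the common.filter helper Christian mentions boils down to
a sed substitution; a minimal sketch of that approach (assuming the
usual QMP event lines in 185.out; the exact expression in common.filter
may differ):

    _filter_block_job_offset()
    {
        # Replace the byte offset in block job events with a fixed
        # token, so the reference output no longer depends on how many
        # iterations the job performed before it was cancelled.
        sed -e 's/"offset": [0-9]\+/"offset": OFFSET/'
    }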
>>>
>>> The test case has the following note:
>>>
>>> # Note that the reference output intentionally includes the 'offset' field in
>>> # BLOCK_JOB_CANCELLED events for all of the following block jobs. They are
>>> # predictable and any change in the offsets would hint at a bug in the job
>>> # throttling code.
>>>
>>> Now the question is whether this particular change is okay rather than
>>> hinting at a bug, so that we should update the reference output, or
>>> whether we need to change qemu again.
>>>
>>> I think the change isn't really bad, but we are doing more useless work
>>> now than we used to do before, and we're exiting slower because we're
>>> spawning additional I/O that we have to wait for. So the better state
>>> was certainly the old one.
>>
>> Here is the reason for the additional I/O and how it could be
>> eliminated:
>>
>> child_job_drained_begin() pauses and child_job_drained_end() resumes the
>> job. Resuming the job cancels the timer (if pending) and enters the
>> blockjob's coroutine. This is why draining BlockDriverState forces an
>> extra iteration of the blockjob.
>>
>> The current behavior is intended for the case where the job has I/O
>> pending at child_job_drained_begin() time. When the I/O completes, the
>> job will see it must pause and it will yield at a pause point. It makes
>> sense for child_job_drained_end() to resume the job so it can start I/O
>> again.
>>
>> In our case this behavior doesn't make sense though. The job already
>> yielded before child_job_drained_begin() and it doesn't need to resume
>> at child_job_drained_end() time. We'd prefer to wait for the timer
>> expiry.
> Thanks for all of your good comments! I think the better way is to
> filter out this case, but I am not sure whether the proper way to do it
> is to add a yielded field in struct BlockJob to record whether it
> yielded via block_job_do_yield. However, this way can only solve the
> offset problem, not the status. Maybe we also need to change 185.out? I
> am a bit confused. Thanks
What I did is set job->yielded in block_job_do_yield and add a check for
it in block_job_pause and block_job_resume: if yielded is true, just
return.
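Roughly like this (only a sketch: the surrounding blockjob.c code is
heavily simplified, and the yielded field is what I'm proposing, not
something that exists today):

    #include <stdbool.h>

    /* Simplified from blockjob.c; only the parts relevant to the idea. */
    typedef struct BlockJob {
        bool yielded;           /* proposed new field */
        int pause_count;
        /* timer, coroutine, driver, ... elided */
    } BlockJob;

    void qemu_coroutine_yield(void);     /* QEMU coroutine primitive */
    void block_job_enter(BlockJob *job);

    static void block_job_do_yield(BlockJob *job)
    {
        job->yielded = true;             /* parked voluntarily */
        qemu_coroutine_yield();          /* woken by timer expiry or cancel */
        job->yielded = false;
    }

    void block_job_pause(BlockJob *job)
    {
        if (job->yielded) {
            return;                      /* already parked, nothing to pause */
        }
        job->pause_count++;
    }

    void block_job_resume(BlockJob *job)
    {
        if (job->yielded) {
            return;                      /* keep waiting for the timer rather
                                          * than forcing an extra iteration */
        }
        if (--job->pause_count > 0) {
            return;
        }
        block_job_enter(job);            /* re-enter the job coroutine */
    }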
>>
>> We need to distinguish these two cases to avoid resuming the job in the
>> latter case. I think this would be the proper way to avoid unnecessary
>> blockjob activity across drain.
>>
>> I favor updating the test output though, because the code change adds
>> complexity and the practical benefit is not obvious to me.
>>
>> QingFeng, if you decide to modify blockjobs, please CC Jeffrey Cody
>> <jcody@redhat.com> and I'd also be happy to review the patch.
>>
>> Stefan
>>
>
--
Regards
QingFeng Hao