From: Juan Quintela <quintela@redhat.com>
To: Eric Auger <eric.auger@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>,
Kunkun Jiang <jiangkunkun@huawei.com>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
Peter Xu <peterx@redhat.com>,
"open list:All patches CC here" <qemu-devel@nongnu.org>,
lushenming@huawei.com,
"open list:ARM cores" <qemu-arm@nongnu.org>,
wanghaibin.wang@huawei.com
Subject: Re: [question] Shall we flush ITS tables into guest RAM when shutdown the VM?
Date: Thu, 22 Jul 2021 15:19:03 +0200 [thread overview]
Message-ID: <87fsw6ebaw.fsf@secure.mitica> (raw)
In-Reply-To: <fc630856-cc51-4830-9f85-4cf89e632819@redhat.com> (Eric Auger's message of "Tue, 6 Jul 2021 16:27:57 +0200")
Eric Auger <eric.auger@redhat.com> wrote:
> Hi Dave,
>
> On 7/6/21 4:19 PM, Dr. David Alan Gilbert wrote:
>> * Eric Auger (eric.auger@redhat.com) wrote:
...
>>>>>> Well, I initially wanted to know more about this scenario to determine
>>>>>> whether
>>>>>> a normal shutdown would fall into it.😂
>>>>> I think it was for save/restore use case. In that case you need to flush
>>>>> the KVM cache in memory on VM shutdown.
>>>> Sorry for late reply.
>>>>
>>>> Can we distinguish these cases based on the 'RunState'?
>>>> When we stop the VM, the RunState will be set. There are many types of
>>>> RunState, such as RUN_STATE_FINISH_MIGRATE/RUN_STATE_SAVE_VM/
>>>> RUN_STATE_SHUTDOWN/RUN_STATE_GUEST_PANICKED, etc.
>>>>
>>>> Maybe RUN_STATE_SHUTDOWN doesn't belong to save/restore use case,
>>>> right?
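
(A minimal illustration of the RunState idea above, assuming a VM-stop handler along the lines of QEMU's VMChangeStateHandler; the function and helper names below are hypothetical and not the actual hw/intc/arm_gicv3_its_kvm.c code:)

static void its_vm_state_change(void *opaque, bool running, RunState state)
{
    if (running) {
        return;
    }
    /*
     * Only flush the KVM-cached ITS tables into guest RAM for the
     * states that migration and save/restore actually go through,
     * not for e.g. RUN_STATE_SHUTDOWN or RUN_STATE_GUEST_PANICKED.
     */
    if (state == RUN_STATE_FINISH_MIGRATE || state == RUN_STATE_SAVE_VM) {
        its_save_tables_to_guest_ram(opaque);   /* hypothetical helper */
    }
}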
>>> Adding Dave, Juan and Peter in the loop for migration expertise.
>>>
>>> At the moment we save the ARM ITS MSI controller tables whenever the VM
>>> gets stopped. Saving the tables from KVM caches into the guest RAM is
>>> needed for migration and save/restore use cases.
>>> However, with GICv4 this fails at the KVM level because some MSIs are
>>> forwarded and saving their state is not supported with GICv4.
>>>
>>> While GICv4 migration is not supported, we would like the VM to work
>>> properly, i.e. to be stoppable without having to save the tables.
>>>
>>> So could we be more precise and identify the save/restore and migration
>>> use cases instead of saving the tables on each VM shutdown?
>> During the precopy migration (not sure about others), we do:
>>
>> static void migration_completion(MigrationState *s)
>> {
>>     ....
>>     ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
>>     ...
>>     ret = qemu_savevm_state_complete_precopy(s->to_dst_file, false,
>>                                              inactivate);
>>
>> so I think we do have that state there to hook off.
>
> That's consistent with what you suggested in the past and what is logged
> in the commit message of
>
> cddafd8f353d2d251b1a5c6c948a577a85838582 ("hw/intc/arm_gicv3_its:
> Implement state save/restore").
Hi
Ouch, this is really a mess. Why do we need to save it to RAM instead of
saving it somewhere else?
I guess the answer is that we don't want to know what the state is,
so we are migrating an opaque blob.
> However, does save/restore enter that state? If I remember
> correctly, that's why I decided to do the save on each VM stop instead.
>>> The tables are saved into guest RAM, so we need the CPUs and devices to
>>> be stopped, but the guest RAM must be saved after the ITS save
>>> operation.
Saving this data into RAM dirties the bitmaps, right?
>> Yeh so what should happen is that you:
>> a) Iterate RAM a lot
>> b) You stop everything
>> -> Flushes remaining changes into RAM
>> c) Transmit device state and last bits of RAM changes.
>>
>> so that flush should happen at (b).
> That's correct.
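
(To make the ordering concrete: the same completion fragment quoted above, re-annotated with where steps (a), (b) and (c) land; the comments are editorial and the elisions are as in the original snippet:)

static void migration_completion(MigrationState *s)
{
    /* (a) the iterative phase has already sent most of RAM */

    /*
     * (b) stop CPUs and devices; this is where the ITS tables must be
     *     flushed into guest RAM, dirtying the bitmap one last time
     */
    ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
    ...
    /* (c) send the device state and the last dirtied RAM pages */
    ret = qemu_savevm_state_complete_precopy(s->to_dst_file, false,
                                             inactivate);
}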
/* does a state transition even if the VM is already stopped,
   current state is forgotten forever */
int vm_stop_force_state(RunState state)
{
    if (runstate_is_running()) {
        return vm_stop(state);
    } else {
        int ret;
        runstate_set(state);

        bdrv_drain_all();
        /* Make sure to return an error if the flush in a previous vm_stop()
         * failed. */
        ret = bdrv_flush_all();
        trace_vm_stop_flush_all(ret);
        return ret;
    }
}
You really want to hook in here, like the block layer does.
But as far as I can see, there is no generic way to put a hook there,
and the path is different depending on whether the machine is running or not.
I am thinking about how to add such a hook.
Suggestions are welcome if you have a good name for it.
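
(One possible shape for such a hook, sketched on top of QEMU's existing notifier helpers from include/qemu/notify.h; the names vm_pre_stop_notifiers and qemu_add_vm_pre_stop_notifier are invented for illustration, no such API exists today:)

#include "qemu/notify.h"

static NotifierList vm_pre_stop_notifiers =
    NOTIFIER_LIST_INITIALIZER(vm_pre_stop_notifiers);

/* Devices (e.g. the ITS) register a notifier to flush cached state
 * into guest RAM before the VM is declared stopped. */
void qemu_add_vm_pre_stop_notifier(Notifier *notifier)
{
    notifier_list_add(&vm_pre_stop_notifiers, notifier);
}

/* ... and at the top of vm_stop_force_state(), on both the running and
 * the already-stopped paths:
 *
 *     notifier_list_notify(&vm_pre_stop_notifiers, &state);
 */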
Later, Juan.