From: David Hildenbrand <david@redhat.com>
To: zhukeqian <zhukeqian1@huawei.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
Peter Maydell <peter.maydell@linaro.org>,
Igor Mammedov <imammedo@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>
Cc: "Wanghaibin (D)" <wanghaibin.wang@huawei.com>,
yuzenghui <yuzenghui@huawei.com>,
jiangkunkun <jiangkunkun@huawei.com>,
Salil Mehta <salil.mehta@huawei.com>,
Jonathan Cameron <jonathan.cameron@huawei.com>,
"Zengtao (B)" <prime.zeng@hisilicon.com>
Subject: Re: Reply: [PATCH v1 1/2] system/cpus: Fix pause_all_vcpus() under concurrent environment
Date: Tue, 19 Mar 2024 10:24:14 +0100 [thread overview]
Message-ID: <7cc3c19c-00f0-4ad2-b0de-ba42e9b20c2a@redhat.com> (raw)
In-Reply-To: <7387988008764735b2f1dd5f2c83a45a@huawei.com>
On 19.03.24 06:06, zhukeqian wrote:
> Hi David,
>
> Thanks for reviewing.
>
> On 17.03.24 09:37, Keqian Zhu via wrote:
>>> Both the main loop thread and vCPU threads are allowed to call
>>> pause_all_vcpus(), and in general resume_all_vcpus() is called
>>> afterwards. Two issues exist in pause_all_vcpus():
>>
>> In general, calling pause_all_vcpus() from VCPU threads is quite dangerous.
>>
>> Do we have reproducers for the cases below?
>>
>
> I reproduced the issues while testing the ARM vCPU hotplug feature:
> The QEMU changes for vCPU hotplug can be cloned from:
> https://github.com/salil-mehta/qemu.git virt-cpuhp-armv8/rfc-v2
> The guest kernel changes (by James Morse, ARM) are available here:
> https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git virtual_cpu_hotplug/rfc/v2
>
Thanks for this information (it would be reasonable to include it in the cover letter).
Okay, so this is likely not actually a "fix" for upstream as it stands. Understood.
> The procedure to reproduce the problems:
> 1. Start a Linux VM (e.g., called OS-vcpuhotplug) with 32 possible vCPUs and 16 current vCPUs.
> 2. Log in to the guest OS and run script[1] to continuously online/offline CPUs.
> 3. On the host side, run script[2] to continuously hotplug/unplug vCPUs.
> After several minutes, we can hit these problems.
>
> Script[1] to online/offline CPUs:
> for ((time=1;time<10000000;time++));
> do
>     for ((cpu=16;cpu<32;cpu++));
>     do
>         echo 1 > /sys/devices/system/cpu/cpu$cpu/online
>     done
>
>     for ((cpu=16;cpu<32;cpu++));
>     do
>         echo 0 > /sys/devices/system/cpu/cpu$cpu/online
>     done
> done
>
> Script[2] to hotplug/unplug vCPUs:
> for ((time=1;time<1000000;time++));
> do
>     echo $time
>     for ((cpu=16;cpu<=32;cpu++));
>     do
>         echo "virsh setvcpus OS-vcpuhotplug --count $cpu --live"
>         virsh setvcpus OS-vcpuhotplug --count $cpu --live
>         sleep 2
>     done
>
>     for ((cpu=32;cpu>=16;cpu--));
>     do
>         echo "virsh setvcpus OS-vcpuhotplug --count $cpu --live"
>         virsh setvcpus OS-vcpuhotplug --count $cpu --live
>         sleep 2
>     done
>
>     for ((cpu=16;cpu<=32;cpu+=2));
>     do
>         echo "virsh setvcpus OS-vcpuhotplug --count $cpu --live"
>         virsh setvcpus OS-vcpuhotplug --count $cpu --live
>         sleep 2
>     done
>
>     for ((cpu=32;cpu>=16;cpu-=2));
>     do
>         echo "virsh setvcpus OS-vcpuhotplug --count $cpu --live"
>         virsh setvcpus OS-vcpuhotplug --count $cpu --live
>         sleep 2
>     done
> done
>
> Script[1] triggers PSCI CPU_ON calls, which are emulated by QEMU and result in calling cpu_reset() on a vCPU thread.
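>
> Roughly the call chain, as a sketch (the upstream function names below
> are what I believe they are; the GIC reset that then needs
> pause_all_vcpus() comes from the hotplug RFC branch, so treat that last
> step as an assumption):
>
>     /* On the vCPU thread that executes the PSCI call: */
>     arm_handle_psci_call()                  /* target/arm/psci.c */
>       -> arm_set_cpu_on()                   /* target/arm/arm-powerctl.c */
>         -> async_run_on_cpu(arm_set_cpu_on_async_work)
>              /* The queued work runs on the target vCPU thread: */
>              -> cpu_reset(target_cpu_state)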
I spotted new pause_all_vcpus() / resume_all_vcpus() calls in hw/intc/arm_gicv3_kvm.c and
thought they would be the problematic bit.
Yeah, that's going to be problematic. Further note that a lot of code does not expect
that the BQL is suddenly dropped.
We had issues with that in a different context, where we ended up wanting to use pause/resume from vCPU context:
https://lore.kernel.org/all/294a987d-b0ef-1b58-98ac-0d4d43075d6e@redhat.com/
This sounds like a bad idea. Read below.
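To illustrate why a suddenly-dropped BQL bites, here is a hypothetical
caller pattern (the helper names are made up; this is not code from the
series):

    bql_lock();
    MemoryRegion *mr = lookup_region();  /* hypothetical helper; the
                                          * pointer is only stable while
                                          * the BQL is held */
    pause_all_vcpus();                   /* drops + re-takes the BQL! */
    access_region(mr);                   /* hypothetical helper; mr may
                                          * have been finalized meanwhile */
    bql_unlock();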
> For the ARM architecture, it needs to reset the GICC registers, which is only possible when all vCPUs are paused. So script[1]
> will cause pause_all_vcpus() to be called on a vCPU thread.
> Script[2] also calls cpu_reset() for each newly hotplugged vCPU, which is done on the main loop thread.
> So this scenario causes the problems I state in the commit message.
>
>>>
>>> 1. There is a possibility that, while thread T1 waits on qemu_pause_cond
>>> with the BQL unlocked, another thread has called
>>> pause_all_vcpus() and resume_all_vcpus(); thread T1 then gets stuck,
>>> because the condition all_vcpus_paused() is never true again.
>>
>> How can this happen?
>>
>> Two threads calling pause_all_vcpus() is borderline broken, as you note.
>>
>> IIRC, we should call pause_all_vcpus() only if some other mechanism prevents these races. For example, based on runstate changes.
>>
>
> We already have the BQL to prevent concurrent calls of pause_all_vcpus() and resume_all_vcpus(). But pause_all_vcpus()
> unlocks the BQL halfway through, which gives other threads a chance to call pause and resume. The existing code did not consider
> this problem; this patch adds a retry mechanism to fix it.
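>
> A minimal sketch of the window and the retry idea (the wait loop is
> simplified from pause_all_vcpus() in system/cpus.c; the retry details
> are illustrative, not the exact patch):
>
>     while (!all_vcpus_paused()) {
>         /* qemu_cond_wait() drops the BQL here; another thread may run
>          * pause_all_vcpus() + resume_all_vcpus() in this window. */
>         qemu_cond_wait(&qemu_pause_cond, &bql);
>         CPU_FOREACH(cpu) {
>             if (!cpu->stop && !cpu->stopped) {
>                 /* Someone resumed the vCPUs while we slept: re-request
>                  * the stop instead of waiting forever (the retry). */
>                 cpu->stop = true;
>                 qemu_cpu_kick(cpu);
>             }
>         }
>     }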
Note that the BQL did not prevent concurrent calls of pause_all_vcpus(). There had to be something else; likely that was runstate transitions.
>
>>
>> Just imagine one thread calling pause_all_vcpus() while another one
>> calls resume_all_vcpus(). It cannot possibly work.
>
> With the BQL, we can make sure all vCPUs are paused after pause_all_vcpus() finishes, and all vCPUs are resumed after resume_all_vcpus() finishes.
>
> For example, the following situation may occur:
> Thread T1: lock BQL -> pause_all_vcpus -> wait on cond and unlock BQL -> wait for T2 to unlock BQL -> lock BQL && all_vcpus_paused -> success, do other work -> unlock BQL
> Thread T2: wait for T1 to unlock BQL -> lock BQL -> resume_all_vcpus -> success, do other work -> unlock BQL
Now throw in another thread and it all gets really complicated :)
Finding ways to avoid pause_all_vcpus() in the ARM reset code would be preferable.
I guess you simply want to do something similar to what KVM does to avoid messing
with pause_all_vcpus(): inhibiting certain IOCTLs.
commit f39b7d2b96e3e73c01bb678cd096f7baf0b9ab39
Author: David Hildenbrand <david@redhat.com>
Date: Fri Nov 11 10:47:58 2022 -0500
kvm: Atomic memslot updates
If we update an existing memslot (e.g., resize, split), we temporarily
remove the memslot to re-add it immediately afterwards. These updates
are not atomic, especially not for KVM VCPU threads, such that we can
get spurious faults.
Let's inhibit most KVM ioctls while performing relevant updates, such
that we can perform the update just as if it would happen atomically
without additional kernel support.
We capture the add/del changes and apply them in the notifier commit
stage instead. There, we can check for overlaps and perform the ioctl
inhibiting only if really required (-> overlap).
To keep things simple we don't perform additional checks that wouldn't
actually result in an overlap -- such as !RAM memory regions in some
cases (see kvm_set_phys_mem()).
To minimize cache-line bouncing, use a separate indicator
(in_ioctl_lock) per CPU. Also, make sure to hold the kvm_slots_lock
while performing both actions (removing+re-adding).
We have to wait until all IOCTLs were exited and block new ones from
getting executed.
This approach cannot result in a deadlock as long as the inhibitor does
not hold any locks that might hinder an IOCTL from getting finished and
exited - something fairly unusual. The inhibitor will always hold the BQL.
AFAIKs, one possible candidate would be userfaultfd. If a page cannot be
placed (e.g., during postcopy), because we're waiting for a lock, or if the
userfaultfd thread cannot process a fault, because it is waiting for a
lock, there could be a deadlock. However, the BQL is not applicable here,
because any other guest memory access while holding the BQL would already
result in a deadlock.
Nothing else in the kernel should block forever and wait for userspace
intervention.
Note: pause_all_vcpus()/resume_all_vcpus() or
start_exclusive()/end_exclusive() cannot be used, as they either drop
the BQL or require to be called without the BQL - something inhibitors
cannot handle. We need a low-level locking mechanism that is
deadlock-free even when not releasing the BQL.
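For reference, the usage pattern is roughly as follows (a sketch,
assuming the accel_ioctl_inhibit_begin()/accel_ioctl_inhibit_end() API
that this work introduced; the update steps are hypothetical
placeholders, see kvm_region_commit() in accel/kvm/kvm-all.c for the
real thing):

    /* Perform a non-atomic memslot update "atomically" from the
     * guest's point of view by inhibiting KVM ioctls around it. */
    bql_lock();                    /* the inhibitor always holds the BQL */
    accel_ioctl_inhibit_begin();   /* wait for in-flight ioctls and
                                    * block new ones from starting */
    kvm_remove_old_memslot();      /* hypothetical update steps */
    kvm_add_new_memslots();
    accel_ioctl_inhibit_end();     /* let vCPU threads issue ioctls again */
    bql_unlock();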
--
Cheers,
David / dhildenb