From: Salil Mehta via <qemu-devel@nongnu.org>
To: David Hildenbrand <david@redhat.com>,
xianglai li <lixianglai@loongson.cn>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Cc: "Salil Mehta" <salil.mehta@opnsrc.net>,
"Xiaojuan Yang" <yangxiaojuan@loongson.cn>,
"Song Gao" <gaosong@loongson.cn>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Igor Mammedov" <imammedo@redhat.com>,
"Ani Sinha" <anisinha@redhat.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Richard Henderson" <richard.henderson@linaro.org>,
"Eduardo Habkost" <eduardo@habkost.net>,
"Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"wangyanan (Y)" <wangyanan55@huawei.com>,
"Daniel P. Berrangé" <berrange@redhat.com>,
"Peter Xu" <peterx@redhat.com>, "Bibo Mao" <maobibo@loongson.cn>
Subject: RE: [PATCH v2 04/10] Introduce the CPU address space destruction function
Date: Tue, 26 Sep 2023 11:03:53 +0000 [thread overview]
Message-ID: <042d0b2c56ae4e298a55b5fbb5fba8af@huawei.com> (raw)
In-Reply-To: <43f04ba4-3e16-ea5c-a212-66dda73a76c4@redhat.com>
Hi David,
> From: qemu-devel-bounces+salil.mehta=huawei.com@nongnu.org <qemu-devel-
> bounces+salil.mehta=huawei.com@nongnu.org> On Behalf Of David Hildenbrand
> Sent: Tuesday, September 12, 2023 8:00 AM
> To: xianglai li <lixianglai@loongson.cn>; qemu-devel@nongnu.org
> Cc: Salil Mehta <salil.mehta@opnsrc.net>; Xiaojuan Yang
> <yangxiaojuan@loongson.cn>; Song Gao <gaosong@loongson.cn>; Michael S.
> Tsirkin <mst@redhat.com>; Igor Mammedov <imammedo@redhat.com>; Ani Sinha
> <anisinha@redhat.com>; Paolo Bonzini <pbonzini@redhat.com>; Richard
> Henderson <richard.henderson@linaro.org>; Eduardo Habkost
> <eduardo@habkost.net>; Marcel Apfelbaum <marcel.apfelbaum@gmail.com>;
> Philippe Mathieu-Daudé <philmd@linaro.org>; wangyanan (Y)
> <wangyanan55@huawei.com>; Daniel P. Berrangé <berrange@redhat.com>; Peter
> Xu <peterx@redhat.com>; Bibo Mao <maobibo@loongson.cn>
> Subject: Re: [PATCH v2 04/10] Introduce the CPU address space destruction
> function
>
> On 12.09.23 04:11, xianglai li wrote:
> > Introduce new function to destroy CPU address space resources
> > for cpu hot-(un)plug.
> >
> How do other archs handle that? Or how are they able to get away without
> destroying?
This patch-set is based on the ARM RFC, where we do destroy the AddressSpace.
Is there a reason you are hinting that it should not be done?
I posted RFC v2 of the virtual CPU hotplug support for ARM today, and
you are CC'ed on it. Please have a look at the implementation:
https://lore.kernel.org/qemu-devel/20230926100436.28284-1-salil.mehta@huawei.com/T/#m523b37819c4811c7827333982004e07a1ef03879
Thanks
Salil.
Thread overview: 36+ messages
2023-09-12 2:11 [PATCH v2 00/10] Adds CPU hot-plug support to Loongarch xianglai li
2023-09-12 2:11 ` [PATCH v2 01/10] Update ACPI GED framework to support vcpu hot-(un)plug xianglai li
2023-09-12 2:11 ` [PATCH v2 02/10] Update CPUs AML with cpu-(ctrl)dev change xianglai li
2023-09-12 2:11 ` [PATCH v2 03/10] make qdev_disconnect_gpio_out_named() public xianglai li
2023-09-12 8:10 ` Philippe Mathieu-Daudé
2023-09-15 7:00 ` lixianglai
2023-09-12 2:11 ` [PATCH v2 04/10] Introduce the CPU address space destruction function xianglai li
2023-09-12 7:00 ` David Hildenbrand
2023-09-14 13:00 ` lixianglai
2023-09-14 13:26 ` David Hildenbrand
2023-09-15 2:48 ` lixianglai
2023-09-15 2:53 ` lixianglai
2023-09-15 8:07 ` David Hildenbrand
2023-09-15 9:54 ` lixianglai
2023-09-15 14:19 ` Philippe Mathieu-Daudé
2023-09-15 15:22 ` David Hildenbrand
2023-09-26 11:25 ` Salil Mehta via
2023-09-26 11:21 ` Salil Mehta via
2023-09-26 11:55 ` Salil Mehta via
2023-09-26 12:23 ` David Hildenbrand
2023-09-26 12:32 ` Salil Mehta via
2023-09-26 12:37 ` David Hildenbrand
2023-09-26 12:44 ` Salil Mehta via
2023-09-26 12:52 ` David Hildenbrand
2023-09-27 2:16 ` lixianglai
2023-09-26 11:11 ` Salil Mehta via
2023-09-26 11:06 ` Salil Mehta via
2023-09-26 11:03 ` Salil Mehta via [this message]
2023-09-12 2:11 ` [PATCH v2 05/10] Added CPU topology support for Loongarch xianglai li
2023-09-12 2:11 ` [PATCH v2 06/10] Optimize loongarch_irq_init function implementation xianglai li
2023-09-12 2:11 ` [PATCH v2 07/10] Add basic CPU hot-(un)plug support for Loongarch xianglai li
2023-09-12 2:11 ` [PATCH v2 08/10] Add support of *unrealize* for Loongarch cpu xianglai li
2023-09-12 2:11 ` [PATCH v2 09/10] Add generic event device for Loongarch xianglai li
2023-09-12 2:11 ` [PATCH v2 10/10] Update the ACPI table for the Loongarch CPU xianglai li
2023-09-12 9:08 ` [PATCH v2 00/10] Adds CPU hot-plug support to Loongarch Salil Mehta via
2023-09-13 3:52 ` lixianglai