From: Salil Mehta via <qemu-devel@nongnu.org>
To: Gavin Shan <gshan@redhat.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"qemu-arm@nongnu.org" <qemu-arm@nongnu.org>,
"mst@redhat.com" <mst@redhat.com>
Cc: "maz@kernel.org" <maz@kernel.org>,
"jean-philippe@linaro.org" <jean-philippe@linaro.org>,
Jonathan Cameron <jonathan.cameron@huawei.com>,
"lpieralisi@kernel.org" <lpieralisi@kernel.org>,
"peter.maydell@linaro.org" <peter.maydell@linaro.org>,
"richard.henderson@linaro.org" <richard.henderson@linaro.org>,
"imammedo@redhat.com" <imammedo@redhat.com>,
"andrew.jones@linux.dev" <andrew.jones@linux.dev>,
"david@redhat.com" <david@redhat.com>,
"philmd@linaro.org" <philmd@linaro.org>,
"eric.auger@redhat.com" <eric.auger@redhat.com>,
"will@kernel.org" <will@kernel.org>,
"ardb@kernel.org" <ardb@kernel.org>,
"oliver.upton@linux.dev" <oliver.upton@linux.dev>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"rafael@kernel.org" <rafael@kernel.org>,
"borntraeger@linux.ibm.com" <borntraeger@linux.ibm.com>,
"alex.bennee@linaro.org" <alex.bennee@linaro.org>,
"npiggin@gmail.com" <npiggin@gmail.com>,
"harshpb@linux.ibm.com" <harshpb@linux.ibm.com>,
"linux@armlinux.org.uk" <linux@armlinux.org.uk>,
"darren@os.amperecomputing.com" <darren@os.amperecomputing.com>,
"ilkka@os.amperecomputing.com" <ilkka@os.amperecomputing.com>,
"vishnu@os.amperecomputing.com" <vishnu@os.amperecomputing.com>,
"karl.heubaum@oracle.com" <karl.heubaum@oracle.com>,
"miguel.luis@oracle.com" <miguel.luis@oracle.com>,
"salil.mehta@opnsrc.net" <salil.mehta@opnsrc.net>,
zhukeqian <zhukeqian1@huawei.com>,
"wangxiongfeng (C)" <wangxiongfeng2@huawei.com>,
"wangyanan (Y)" <wangyanan55@huawei.com>,
"jiakernel2@gmail.com" <jiakernel2@gmail.com>,
"maobibo@loongson.cn" <maobibo@loongson.cn>,
"lixianglai@loongson.cn" <lixianglai@loongson.cn>,
"shahuang@redhat.com" <shahuang@redhat.com>,
"zhao1.liu@intel.com" <zhao1.liu@intel.com>,
Linuxarm <linuxarm@huawei.com>
Subject: RE: [PATCH RFC V3 17/29] arm/virt: Release objects for *disabled* possible vCPUs after init
Date: Fri, 23 Aug 2024 13:17:36 +0000
Message-ID: <aa6bf3294acb4b3f9819c77c94af67c9@huawei.com>
In-Reply-To: <13ab0fda-fea3-47a6-8c17-006e6bd4430a@redhat.com>
Hi Gavin,
> From: Gavin Shan <gshan@redhat.com>
> Sent: Friday, August 23, 2024 11:52 AM
> To: Salil Mehta <salil.mehta@huawei.com>; qemu-devel@nongnu.org;
> qemu-arm@nongnu.org; mst@redhat.com
>
> Hi Salil,
>
> On 8/22/24 8:58 PM, Salil Mehta wrote:
> >> On 8/21/24 8:23 PM, Salil Mehta wrote:
> >> >>
> >> >> On 8/21/24 2:40 AM, Salil Mehta wrote:
> >> >> >
> >> >> > I don’t understand this clearly. Are you suggesting to reuse only
> >> >> > single vCPU object to initialize all KVM vCPUs not yet plugged? If
> >> >> > yes, then I'm not sure what do we gain here by adding this complexity?
> >> >> > It does not consume time or resources because we are not realizing any
> >> >> > of these vCPU object in any case.
> >> >> >
> >> >>
> >> >> First of all, it seems we have different names and terms for those cold-
> >> >> booted vCPUs and hotpluggable vCPUs. For example, vCPU-0 and vCPU-1
> >> >> are cold-booted vCPUs while
> >> >> vCPU-2 and vCPU-3 are hotpluggable vCPUs when we have '-smp
> >> >> maxcpus=4,cpus=2'. Lets stick to convention and terms for easier discussion.
> >> >>
> >> >> The idea is avoid instantiating hotpluggable vCPUs in virtmach_init() and
> >> >> released in the same function for those hotpluggable vCPUs. As I can
> >> >> understand, those hotpluggable vCPU instances are serving for two
> >> >> purposes: (1) Relax the constraint that all vCPU's (kvm) file descriptor have
> >> >> to be created and populated;
> >> >
> >> >
> >> > We are devising *workarounds* in Qemu for the ARM CPU architectural
> >> > constraints in KVM and in Guest Kernel, *not relaxing* them. We are
> >> > not allowed to meddle with the constraints. That is the whole point.
> >> >
> >> > Not having to respect those constraints led to rejection of the
> >> > earlier attempts to upstream Virtual CPU Hotplug for ARM.
> >> >
> >>
> >> I meant to 'overcome' the constraints by 'relax'. My apologies if there're any
> >> caused confusions.
> >
> >
> > Ok. No issues. It was important for me to clarify though.
> >
> >
> > Previously, you had attempt to create all vCPU objects
> >> and reuse them when vCPU hot added.
> >
> > Yes, at QOM level. But that approach did not realize the
> > unplugged/yet-to-be-plugged vCPUs. We were just using QOM vCPU objects
> > as the place holders
> >
>
> Right, my point was actually vCPU objects are too heavy as the place
> holders. It was reason why I had the concern: why those hotpluggable vCPU
> objects can't be avoided during the bootup time.
Sure, to list the reasons again, as I've already explained:
1. KVM MUST have the details of all the vCPUs and their features finalized before the VGIC
   is initialized. This is an ARM CPU architectural constraint and an immutable requirement.
2. The QOM vCPUs carry the representational state of the KVM vCPUs, such as the vcpu-id and
   the feature list. These MUST be finalized even before QOM begins its own GICv3
   initialization, which in turn initializes the KVM VGIC. The QOM vCPUs MUST be handed over
   to the QOM GICv3 in a fully initialized state. The same reason applies here as well.
Up to this point we are architecture compliant. The only place where we are not ARM
architecture compliant is where, in QOM, we dissociate the vCPU state from the GICv3
CPU state during the unplug action or for the yet-to-be-plugged vCPUs. The latter are
released as part of virt_cpu_post_init() in our current design.
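To make the ordering concrete for anyone following the thread, below is a minimal,
standalone user-level sketch (not patch code, untested here; error handling and the
distributor/redistributor address setup are omitted) that exercises the raw KVM API in
the order the architecture expects: every possible vCPU fd is created and initialized via
KVM_ARM_VCPU_INIT before the in-kernel GICv3 is created and initialized.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define MAX_VCPUS 4                      /* i.e. -smp cpus=2,maxcpus=4 */

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);
    int vm  = ioctl(kvm, KVM_CREATE_VM, 0);
    struct kvm_vcpu_init init;
    int vcpu_fd[MAX_VCPUS];

    /* Same target and features for every vCPU: no heterogeneous vCPUs */
    ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);

    /* 1. Create and initialize *all possible* vCPUs, hotpluggable ones included.
     *    This also fixes per-vCPU state such as MPIDR_EL1, which the VGIC
     *    consumes during its own initialization.
     */
    for (int i = 0; i < MAX_VCPUS; i++) {
        vcpu_fd[i] = ioctl(vm, KVM_CREATE_VCPU, i);
        ioctl(vcpu_fd[i], KVM_ARM_VCPU_INIT, &init);
    }

    /* 2. Only now create and initialize the in-kernel GICv3. After this point
     *    KVM will refuse to create further vCPUs, which is exactly why QEMU
     *    cannot defer the creation of the hotpluggable vCPU fds.
     */
    struct kvm_create_device gic = { .type = KVM_DEV_TYPE_ARM_VGIC_V3 };
    ioctl(vm, KVM_CREATE_DEVICE, &gic);

    struct kvm_device_attr ctrl = {
        .group = KVM_DEV_ARM_VGIC_GRP_CTRL,
        .attr  = KVM_DEV_ARM_VGIC_CTRL_INIT,
    };
    ioctl(gic.fd, KVM_SET_DEVICE_ATTR, &ctrl);

    printf("VGIC initialized with %d possible vCPUs\n", MAX_VCPUS);
    return 0;
}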
> > In current implementation, the
> >> hotpluggable vCPUs are instantiated and released pretty soon. I was
> >> bringing the third possibility, to avoid instantiating those hotpluggable vCPU
> >> objects, for discussion.
> >
> >
> > Are you suggesting not calling KVM_ARM_VCPU_INIT IOCTL as all for the
> > vCPUs which are part of possible list but not yet plugged?
> >
> > If yes, we cannot do that as KVM vCPUs should be fully initialized
> > even before VGIC is initialized inside the KVM. This is a constraint.
> > I've explained this in detail in the cover letter of this patch-set
> > and in the slides I have shared earlier.
> >
>
> No, it's not what I was suggesting. What I suggested is to avoid creating
> those hotpluggable vCPU objects (place holders) during the bootup time.
> However, all vCPU file descriptors (KVM
> objects) still need to be created and initialized before GICv3 is initialized. It's
> one of the constrains. So we need to create and populate all vCPU file
> descriptors through ioctl(vm_fd, CREATE_VCPU) and ioctl(vcpu_fd,
> INIT_VCPU) before GICv3 object is created and realized. As I explained in
> the previous reply, the hotpluggable vCPU objects (place holders) haven't
> to be created in order to initialize and populate the vCPU file descriptors for
> those hotpluggable vCPUs. I think the parameters used to create and
> initialize vCPU-0's file descriptor can be reused by all other vCPUs, because
> we don't support heterogeneous vCPUs.
> What I suggested is something like below: the point is to avoid instantiating
> those hotpluggable vCPUs, but their vCPU file descriptor (KVM object) are
> still created and initialized.
>
> static void machvirt_init(MachineState *machine)
> {
>
> /*
> * Instantiate and realize vCPU-0, record the parameter passed to
> * ioctl(vcpu-fd, VCPU_INIT, &init), or a better place to remember the
> parameter.
> * The point is the parameter can be shared by all vCPUs.
> */
>
> /*
> * Create vCPU descriptors for all other vCPUs (including hotpluggable
> vCPUs).
> * The remembered parameter is reused and passed to ioctl(vcpu-fd,
> VCPU_INIT, &init).
> */
>
> /* Instantiate and realize other cold-booted vCPUs */
>
> /* Instantiate and realize GICv3 */
>
> }
No. For the reasons I've mentioned above, we MUST provide fully initialized QOM vCPU
objects before the initialization of the QOM GICv3 kicks in. This ensures that nothing
breaks during the initialization process of the QOM GICv3. Therefore, the optimization
steps mentioned above are unnecessary and could cause problems in the future.
Additionally, the QOM GICv3 can evolve independently of the ARM virt machine, as it can
be used with other machines as well, so we MUST treat it as a black box that takes QOM
vCPU objects as inputs during its initialization.
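For contrast with the sketch above, the ordering the current series keeps is roughly the
following (comments only, heavily simplified; the step descriptions are mine and not the
literal patch code):

static void machvirt_init(MachineState *machine)
{
    /*
     * 1. Instantiate and realize *all possible* QOM vCPUs, cold-booted and
     *    hotpluggable alike. Realizing them also creates and initializes the
     *    corresponding KVM vCPU fds; the hotpluggable ones get parked.
     */

    /*
     * 2. Instantiate and realize the QOM GICv3, sized for the possible vCPUs.
     *    The GICv3 model only consumes fully initialized QOM vCPU objects and
     *    stays a black box with respect to how they were produced.
     */

    /*
     * 3. virt_cpu_post_init(): release the QOM objects of the disabled
     *    (yet-to-be-plugged) vCPUs; their parked KVM fds and the GICv3 CPU
     *    state remain and are re-associated at hot-plug time.
     */
}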
> > In this series, the life cycle of those hotpluggable
> >> vCPU objects are really short. Again, I didn't say we must avoid instantiating
> >> those vCPU objects, I brought the topic ONLY for discussion.
> >
> > Sure, I appreciate that. For the details of the reasons please follow below:
> >
> > 1. Cover Letter of this patch-set (Constraints are explained there) 2.
> > KVMForum Slides of 2020 and 2023
> >
> >
> >> > (2) Help to instantiate and realize
> >> >> GICv3 object.
> >> >>
> >> >> For (1), I don't think we have to instantiate those hotpluggable vCPUs at all.
> >> >> In the above example where we have command line '-smp
> >> >> maxcpus=4,cpus=2', it's unnecessary to instantiate
> >> >> vCPU-3 and vCPU-4 to create and populate their KVM file descriptors.
> >> >
> >> >
> >> > We cannot defer create vCPU in KVM after GIC has been initialized in KVM.
> >> > It needs to know every vCPU that will ever exists right at the time it
> >> > is getting Initialized. This is an ARM CPU Architectural constraint.
> >> >
> >>
> >> It will be appreciated if more details other than 'an ARM CPU architectural constraint'
> >> can be provided. I don't understand this constrain very well at least.
> >
> >
> > We cannot do that as we MUST present KVM vCPUs to the VGIC fully
> > initialized, even before it starts its initialization. Initialization
> > of the vCPUs also initializes the system registers for the corresponding KVM vCPU.
> >
> > For example, MPIDR_EL1 must be initialized at VCPU INIT time. We
> > cannot avoid this. MPIDR value is used by VGIC during its
> > initialization. This MUST be present for all of the possible KVM vCPUs
> > right from start during vgic_init()
> >
> > Please check the cover letter of this patch-set, I explained these
> > there and the KVMForum slides. Please review and comment there and
> > let me know what is not clear from the text.
> >
>
> It seems my suggestion wasn't fully understood. I was suggesting to avoid
> instantiating those hotpluggable vCPU objects (place holders) during QEMU
> startup time. All vCPU file descriptors (the vCPU's corresponding objects)
> still need to be in place before GICv3 object is initiated and realized.
We need both the QOM and the KVM vCPU objects fully initialized even before the QOM GICv3,
and hence the KVM VGIC, starts to initialize itself, for the reasons I've explained above.
Yes, there is duplication, especially since we are not supporting heterogeneous computing,
but let us not add the weight of that optimization to the current patch-set.
> >> > A
> >> >> vCPU's KVM file descriptor is create and populated by the following ioctls
> >> >> and function calls. When the first vCPU (vCPU-0) is realized, the property
> >> >> corresponding to "&init" is fixed for all vCPUs. It means all vCPUs have same
> >> >> properties except the "vcpu_index".
> >> >>
> >> >> ioctl(vm-fd, KVM_CREATE_VCPU, vcpu_index);
> >> >> ioctl(vcpu-fd, KVM_ARM_VCPU_INIT, &init);
> >> >> kvm_park_vcpu(cs);
> >> >>
> >> >> A vCPU's properties are determined by two sources and both are global. It
> >> >> means all vCPUs should have same properties: (a) Feature registers
> >> >> returned from the host. The function
> >> >> kvm_arm_get_host_cpu_features() is called for once, meaning this source
> >> >> is same to all vCPUs;
> >> >
> >> >
> >> > Sure, but what are you trying to save here?
> >> >
> >>
> >> As mentioned above, the life cycle of those hotpluggable vCPU objects are
> >> really short. They still consume time and memory to instantiate them. If I'm
> >> correct, one of the primary goal for vCPU hotplug feature is to save system
> >> boot-up time, correct?
> >
> >
> > Correct. We targeted vCPU hotplug for Kata-containers for on-demand
> > resource allocation and saving the resources. Kata-containers can work
> > with different types of VMM like Qemu and microVMs like Firecracker.
> > AFAIK, Usecase of microVM is slightly different than the normal
> > containers. They are short lived (say around
> > 15 min) and require ultrafast boot-up times (say less than 125 ms) -
> > these figures are from Amazon who invented the concept of microVM in
> the earlier decade.
> >
> > With the current patches, we have only partially achieved what we had
> > started i.e. Kata/Qemu but we also want to target Kata/microVM. In our
> > case, we want that microVM to be Qemu based fro ARM. I think x86
> > already has reduced lots of legacy stuff and created a microVM in
> > Qemu. I'm not sure how it compares against the true microVM like
> > Firecracker. It will be a good target to reduce memory foot print of
> > ARM Qemu Virt Machine. or think or creating a new one just like x86.
> > Using the vCPU Hotplug feature we were drastically able to reduce the
> > boot up times of Qemu. Please check the calibrated performance figures
> > in KVmForum 2023 slide 19G (Page 26) [1]
> >
> > [1]
> > https://kvm-
> forum.qemu.org/2023/Challenges_Revisited_in_Supporting_Vir
> > t_CPU_Hotplug_-__ii0iNb3.pdf
> >
> > Last year, I had prototyped a microVM for ARM, Michael Tsirkin
> > suggested that if the performance number of the ARM Virt machine can
> > match the x86 microVM then we might not require an explicit microVM
> > code for ARM. Hence, my current efforts are to reduce the memory foot
> > print of existing Virt machine. But I do have a rough prototype of
> > microVM as well. We can debate about that later in a different discussion.
> >
>
> Thanks for the linker to the slides. Yeah, it's nice to reduce the bootup time
> and memory footprint if possible. vCPU hotplug feature may help to
> improve the performance, but all other paths might also impact the
> performance. In summary, it's a comprehensive goal to reduce the memory
> footprint and bootup time, and other components (paths) need
> optimization to achieve this goal.
Agreed, there is a lot of scope for that, but let's prioritize and do first things first:
1. We want this patch-set to be accepted after all the fixes. This is the main goal.
2. Then we can think about optimizations such as boot-up time and memory footprint reduction.
> >> >> (b) The parameters provided by user through '-cpu host,sve=off' are
> >> >> translated to global properties and applied to all vCPUs when they're
> >> >> instantiated.
> >> >
> >> >
> >> > Sure. Same is the case with PMU and other per-vCPU parameters.
> >> > We do not support heterogenous computing and therefore we do not have
> >> > per-vCPU control of these features as of now.
> >> >
> >> >
> >> >>
> >> >> (a) (b)
> >> >>
> >> >> aarch64_host_initfn qemu_init
> >> >> kvm_arm_set_cpu_features_from_host parse_cpu_option
> >> >> kvm_arm_get_host_cpu_features
> >> cpu_common_parse_features
> >> >> qdev_prop_register_global
> >> >> :
> >> >> device_post_init
> >> >>
> >> >> qdev_prop_set_globals
> >> >
> >> >
> >> > Sure, I understand the code flow but what are you trying to suggest here?
> >> >
> >>
> >> I tried to explain that vCPU object isn't needed to create and populate
> >> vCPU's file descriptors, as highlight in (1). The information used to create the
> >> cold-booted
> >> vCPU-0 can be reused because all vCPUs have same properties and feature
> >> set.
> >
> >
> > It does not matter. We use those QOM vCPU object states to initializes
> > Qemu
> > GICv3 model with max possible vCPUs and then release the QOM vCPU objects.
> > which are yet-to-be-plugged.
> >
>
> It's what has been implemented in this series. My concern remains: why
> those vCPU hotpluggable objects can't be avoided? Again, their
> corresponding vCPU file descritpors (KVM vCPU objects) still have to be in
> place before GICv3 is instantiated and realized.
Answered above.
> >> >> For (2), I'm still looking into the GICv3 code for better understanding.
> >> >
> >> >
> >> > Oh, I thought you said you've finished your reviews 😊
> >> >
> >> > Please take your time. For your reference, you might want to check:
> >> >
> >> > KVMForum 2023:
> >> > https://kvm-
> >> forum.qemu.org/2023/Challenges_Revisited_in_Supporting_Vir
> >> > t_CPU_Hotplug_-__ii0iNb3.pdf
> >> > https://kvm-forum.qemu.org/2023/KVM-forum-cpu-
> >> hotplug_7OJ1YyJ.pdf
> >> >
> >> > KVMForum 2020:
> >> > https://kvm-
> >> forum.qemu.org/2020/Oct%2029_Challenges%20in%20Supporting%
> >> >
> >>
> 20Virtual%20CPU%20Hotplug%20in%20SoC%20Based%20Systems%20like%2
> >> 0ARM64_
> >> > Salil%20Mehta.pdf
> >> >
> >>
> >> hmm, 'finished my review' has been misread frankly. By that, I meant I
> >> finished my tests and provided all the comments I had. Some of them
> are
> >> questions and discussions, which I still need to follow up.
> >
> >
> > Sure. No worries. Even if you miss, you will have more chance to
> > comment on upcoming RFC V4 😊
> >
>
> Ok :-)
>
> >
> >> > Until
> >> >> now, I don't see we need the instantiated hotpluggable vCPUs either.
> >> >
> >> >
> >> > I think, I've already answered this above it is because of ARM Architectural
> >> constraint.
> >> >
> >> >
> >> > For
> >> >> example, the redistributor regions can be exposed based on 'maxcpus'
> >> >> instead of 'cpus'.
> >> >
> >> > You mean during the review of the code you found that we are not doing
> >> it?
> >> >
> >>
> >> It's all about the discussion to the possibility to avoid instantiating
> >> hotpluggable vCPU objects.
> >
> >
> > As mentioned above, with the current KVM code you cannot. But if we
> > really want to then perhaps we would need to change KVM.
> >
> > I might be wrong but AFAICS I don’t see a reason why we cannot have
> > something like *batch* KVM vCPU create and initialize instead of
> > current sequential KVM operations. This will avoid multiple calls to
> > the KVM Host and can improve Qemu init time further. But this will
> > require a separate discussion in the LKML including all the KVM folks.
> >
> > This has potential to delay the vCPU hotplug feature acceptance and
> > I'm really not in favor of that. We have already stretched it a lot
> > because of the standards change acceptance earlier.
> >
>
> Again, my suggestion wasn't completely understood. I was suggesting to
> avoid instantiating those hotpluggable objects, but their vCPU file
> descriptors still need to be in place before GICv3's instantiation and
> realization.
> Yes, I was also concerned that too much code changes would be needed if
> my suggestion is accepted. It will definitely delay the feature's upstreaming
> process. It's why I said the topic (to avoid the hotpluggable objects) are just
> for discussion now. We can do it (as separate optimization) after your
> current implementation is merged.
I've answered above why we cannot and should not do that.
> >> > The IRQ connection and teardown can be dynamically
> >> >> done by connecting the board with GICv3 through callbacks in
> >> >> ARMGICv3CommonClass.
> >> >> The connection between GICv3CPUState and CPUARMState also can be
> >> >> done dynamically.
> >> >
> >> > Are you suggesting this after reviewing the code or you have to review
> >> > it yet? 😉
> >> >
> >>
> >> I was actually trying to ask for your input and feedback. I was hoping your
> >> input to clear my puzzles: why vCPU objects must be in place to create
> >> GICv3 object?
> >> Is it possible to create the GICv3 object without those vCPU objects?
> >
> >
> > No. VGIC initializes IRQs to target KVM vCPUs, it would expect same
> > KVM vCPU MPIDR or MP-AFFINITY configured when KVM vCPUs were
> > initialized at the first place otherwise the VGIC initialization will not happen
> correctly. Hence, the sequence.
> >
> > The sequence of these initialization is generally strictly controlled
> > by specification which is closely tied up with hardware including powering
> up initializations.
> > You will need to honor the expectations of the KVM VGIC init which in
> > turn are ARM CPU Architecture specification compliant. It is not just a
> loosely written code.
> >
>
> umm, As I explained from the beginning, all KVM vCPU file descriptors are
> still in place before GICv3 is instantiated and realized. With those KVM vCPU
> file descriptor, we shouldn't have problem except more code changes are
> needed, or I miss something? :)
Same as explained above.
> > What
> >> kinds of efforts we need to avoid instantiating those hotpluggable vCPU
> >> objects.
> >
> >
> > I mentioned one of the ways above. Introduce *batch* KVM vCPU create &
> > initialize. But it will have to undergo greater scrutiny because we
> > are touching a common part which might affect many stake holders. But
> > this is a discussion we can do later as part of microVM for ARM.
> >
>
> Remember you just had the architecture agnostic series merged. The vCPU
> file descriptor can be parked and picked up at a later time based on the
> vCPU index.
> This KVM vCPU create in batch mode can be well supported. What you need
> to is record the parameters passed to ioctl(vm_fd, CREATE_VCPU, index)
> and ioctl(vcpu_fd,
> INIT_CPU) for vCPU-0 in hw/arm/virt.c and create all other vCPU file
> descriptors based on the recorded parameters.
Yes, but we would need to consider many other things, and we cannot decide that here.
Let's cross that bridge when we come to it. I'd suggest parking this part of the discussion
for now.
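For the record, the "record once, reuse for all" idea would look roughly like the sketch
below (hypothetical helper and variable names, raw ioctls shown for clarity, error handling
omitted; the real code would go through QEMU's kvm_* wrappers):

static struct kvm_vcpu_init vcpu0_init;   /* captured when vCPU-0 is realized */

static int create_hotpluggable_vcpu_fd(int vm_fd, int vcpu_index)
{
    int fd = ioctl(vm_fd, KVM_CREATE_VCPU, vcpu_index);

    /*
     * Identical target and features as vCPU-0; only vcpu_index differs,
     * since heterogeneous vCPUs are not supported anyway.
     */
    ioctl(fd, KVM_ARM_VCPU_INIT, &vcpu0_init);

    /* The fd would then be parked and looked up by vcpu_index at hot-plug time */
    return fd;
}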
Thanks
Salil.
>
> Thanks,
> Gavin
>