From: bthakur@codeaurora.org
Subject: VCPU hotplug on KVM/ARM
Date: Tue, 27 Feb 2018 15:01:17 +0530

Hi,

I hope this is the right forum to post my query. I am currently looking
at the possibility of adding a new VCPU to a running guest VM in
KVM/ARM.
I see that currently it is not allowed to add a new VCPU to a guest VM
if it is already initialized: the first check in kvm_arch_vcpu_create()
returns failure if the VM is already initialized.

There was some work done in QEMU to add support for VCPU hotplug:
https://lists.gnu.org/archive/html/qemu-arm/2017-05/msg00404.html

But I am looking at the KVM side of enabling the addition of a new
VCPU. If you can point me to any relevant work or resources that I can
refer to, it would help me.

Thanks.

Regards,
Bhupinder
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

From: Christoffer Dall
Subject: Re: VCPU hotplug on KVM/ARM
Date: Tue, 27 Feb 2018 11:47:08 +0100

Hi Bhupinder,

On Tue, Feb 27, 2018 at 03:01:17PM +0530, bthakur@codeaurora.org wrote:
> I am currently looking at the possibility of adding a new VCPU to a
> running guest VM in KVM/ARM.
> I see that currently, it is not allowed to add a new VCPU to a guest
> VM, if it is already initialized. The first check in
> kvm_arch_vcpu_create() returns failure if it is already initialized.

This would require a major rework of a lot of logic surrounding the GIC
and other parts of KVM initialization.

> There was some work done in QEMU to add support for VCPU hotplug:
> https://lists.gnu.org/archive/html/qemu-arm/2017-05/msg00404.html
>
> But I am looking at the KVM side for enabling adding a new VCPU.

I don't have any specific pointers, but I was always told that the way
we were going to do CPU hotplug would be to instantiate a large number
of VCPUs, and hotplug would be equivalent to turning on a VCPU which
was previously powered off.

Is this not still a feasible solution?

How does VCPU hotplug work on x86?

Thanks,
-Christoffer
From: bthakur@codeaurora.org
Subject: Re: VCPU hotplug on KVM/ARM
Date: Tue, 27 Feb 2018 17:34:28 +0530

Hi Christoffer,

Thanks for your reply.

On 2018-02-27 16:17, Christoffer Dall wrote:
> I don't have any specific pointers, but I was always told that the way
> we were going to do CPU hotplug would be to instantiate a large number
> of VCPUs, and hotplug would be equivalent to turning on a VCPU which
> was previously powered off.
>
> Is this not still a feasible solution?

It should be a feasible solution, provided the guest VM is not able to
control the onlining/offlining of VCPUs. It should be controlled by the
Host.

> How does VCPU hotplug work on x86?

On x86, you can add a vcpu through the libvirt setvcpu command, and it
shows up in the guest VM as a new CPU if you do lscpu.
Regards,
Bhupinder

From: Christoffer Dall
Subject: Re: VCPU hotplug on KVM/ARM
Date: Tue, 27 Feb 2018 13:46:04 +0100

On Tue, Feb 27, 2018 at 05:34:28PM +0530, bthakur@codeaurora.org wrote:
> It should be a feasible solution provided the guest VM is not able to
> control the onlining/offlining of VCPUs. It should be controlled by
> the Host.

KVM could simply refuse to turn on some of the CPUs unless given
permission from host userspace.

> On x86, you can add a vcpu through libvirt setvcpu command and it
> shows up in the guest VM as a new CPU if you do lscpu.

Sure, but what is the mechanism? Does x86 QEMU actually call
KVM_CREATE_VCPU, or is this also a question of turning on already
created vcpus?
Thanks,
-Christoffer

From: Andrew Jones
Subject: Re: VCPU hotplug on KVM/ARM
Date: Tue, 27 Feb 2018 14:21:31 +0100

On Tue, Feb 27, 2018 at 01:46:04PM +0100, Christoffer Dall wrote:
> Sure, but what is the mechanism, does x86 qemu actually call
> KVM_CREATE_VCPU, or is this also a question of turning on already
> created vcpus?
CC'ing Igor and qemu-devel

drew

From: Igor Mammedov
Subject: Re: [Qemu-devel] VCPU hotplug on KVM/ARM
Date: Thu, 1 Mar 2018 10:50:30 +0100

On Tue, 27 Feb 2018 14:21:31 +0100, Andrew Jones wrote:
> > Sure, but what is the mechanism, does x86 qemu actually call
> > KVM_CREATE_VCPU, or is this also a question of turning on already
> > created vcpus?

In QEMU on x86 (and I think ppc and s390 as well), we create vCPUs on
demand.
It would be nice if ARM would be able to do that too, so that it could
take advantage of the same code.

From: Peter Maydell
Subject: Re: [Qemu-arm] [Qemu-devel] VCPU hotplug on KVM/ARM
Date: Thu, 1 Mar 2018 10:05:09 +0000

On 1 March 2018 at 09:50, Igor Mammedov wrote:
> In QEMU on x86 (and I think ppc, s390 as well), we create vCPUs on
> demand. It would be nice if ARM would be able to do that too, so that
> it could take advantage of the same code.
It's not clear to me how that would work, given that for instance the
interrupt controller wants to know up-front how many CPUs it has to
deal with.

thanks
-- PMM

From: David Hildenbrand
Subject: Re: [Qemu-devel] [Qemu-arm] VCPU hotplug on KVM/ARM
Date: Thu, 1 Mar 2018 14:32:39 +0100

On 01.03.2018 11:05, Peter Maydell wrote:
> It's not clear to me how that would work, given that for instance the
> interrupt controller wants to know up-front how many CPUs it has to
> deal with.
So how is CPU hotplug handled in hardware? Or doesn't it even exist
there? (We have max_cpus for the interrupt controller, but I am not
sure that is what we want.)

Thanks,
David / dhildenb

From: Marc Zyngier
Subject: Re: [Qemu-devel] [Qemu-arm] VCPU hotplug on KVM/ARM
Date: Wed, 7 Mar 2018 12:47:39 +0000

On 01/03/18 13:32, David Hildenbrand wrote:
> So how is cpu hotplug handled in HW? Or doesn't it even exist there?

I don't know of any physical system offering that facility.

> (we have max_cpus for the interrupt controller, but not sure if that
> is what we want)

We'd need something along those lines. Each CPU has a notional
point-to-point link to the interrupt controller (to the redistributor,
to be precise), and this entity must pre-exist.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...
From: Maran Wilson
Subject: Re: VCPU hotplug on KVM/ARM
Date: Tue, 24 Jul 2018 11:35:31 -0700

It's been a few months since this email thread died off. Has anyone
started working on a potential solution that would allow VCPU hotplug
on KVM/ARM? Or is this a project that is still waiting for an owner who
has the time and inclination to get started?

Thanks,
-Maran

On 2/27/2018 5:21 AM, Andrew Jones wrote:
> [...]
From: Igor Mammedov
Subject: Re: [Qemu-devel] VCPU hotplug on KVM/ARM
Date: Wed, 25 Jul 2018 08:45:02 +0200

On Tue, 24 Jul 2018 11:35:31 -0700, Maran Wilson wrote:
> It's been a few months since this email thread died off. Has anyone
> started working on a potential solution that would allow VCPU hotplug
> on KVM/ARM? Or is this a project that is still waiting for an owner
> who has the time and inclination to get started?

I'm working on the QEMU side of it as time allows. I can guide, or
share the task, if you are interested in helping out with it.
If you > >>>>> can > >>>>> point me to any relevant work/resources, which I can refer to then it > >>>>> will > >>>>> help me. > >>>>> > >>>> I don't have any specific pointers, but I was always told that the way > >>>> we were going to do CPU hotplug would be to instantiate a large number > >>>> of VCPUs, and hotplug would be equivalent to turning on a VCPU which was > >>>> previously powered off. > >>>> > >>>> Is this not still a feasible solution? > >>> It should be a feasible solution provided the guest VM is not able to > >>> control the onlining/offlining of VCPUs. It should be controlled by the > >>> Host. > >>> > >> KVM could simply refuse to turn on some of the CPUs unless given > >> permission from host userspace. > >> > >>>> How does VCPU hotplug work on x86? > >>> On x86, you can add a vcpu through libvirt setvcpu command and it shows up > >>> in the guest VM as a new CPU if you do lscpu. > >>> > >> Sure, but what is the mechanism, does x86 qemu actually call > >> KVM_CREATE_VCPU, or is this also a question of turning on already > >> created vcpus ? 
From: Marc Zyngier
Subject: Re: VCPU hotplug on KVM/ARM
Date: Wed, 25 Jul 2018 11:40:54 +0100
To: Maran Wilson, kvmarm@lists.cs.columbia.edu, bthakur@codeaurora.org

On 24/07/18 19:35, Maran Wilson wrote:
> It's been a few months since this email thread died off. Has anyone
> started working on a potential solution that would allow VCPU hotplug on
> KVM/ARM? Or is this a project that is still waiting for an owner who
> has the time and inclination to get started?

This is typically a project for someone who would have this particular
itch to scratch, and who has a demonstrable need for this functionality.

Work-wise, it would have to include adding physical CPU hotplug support
to the arm64 kernel as a precondition, before worrying about doing it in
KVM.

For KVM itself, particular areas of interest would be:

- Making GICv3 redistributors magically appear in the IPA space
- Live resizing of GICv3 structures
- Dynamic allocation of MPIDR, and its mapping to vcpu_id

This should keep someone busy for a good couple of weeks (give or take a
few months).

That being said, I'd rather see support in QEMU first, creating all the
vcpus/redistributors upfront and signalling the hotplug event via the
virtual firmware, and then post some numbers to show that creating all
the vcpus upfront is not acceptable.

Thanks,

	M.
--
Jazz is not dead. It just smells funny...

From: Andrew Jones
Subject: Re: [Qemu-devel] VCPU hotplug on KVM/ARM
Date: Wed, 25 Jul 2018 14:28:06 +0200
To: Marc Zyngier
On Wed, Jul 25, 2018 at 11:40:54AM +0100, Marc Zyngier wrote:
> On 24/07/18 19:35, Maran Wilson wrote:
> > It's been a few months since this email thread died off. [...]
>
> This is typically a project for someone who would have this particular
> itch to scratch, and who has a demonstrable need for this functionality.
>
> Work-wise, it would have to include adding physical CPU hotplug support
> to the arm64 kernel as a precondition, before worrying about doing it in
> KVM.
>
> For KVM itself, particular areas of interest would be:
> - Making GICv3 redistributors magically appear in the IPA space
> - Live resizing of GICv3 structures
> - Dynamic allocation of MPIDR, and its mapping to vcpu_id

I have CPU topology description patches on the QEMU list now[*]. A next
step for me is to do this MPIDR work. I probably won't get to it until
the end of August though.

[*] http://lists.nongnu.org/archive/html/qemu-devel/2018-07/msg01168.html

> This should keep someone busy for a good couple of weeks (give or take a
> few months).

:-)

> That being said, I'd rather see support in QEMU first, creating all the
> vcpus/redistributors upfront and signalling the hotplug event via the
> virtual firmware, and then post some numbers to show that creating all
> the vcpus upfront is not acceptable.
I think the upfront allocation, allocating all possible cpus but only
activating all present cpus, was the planned approach. What were the
concerns about that approach? Just vcpu memory overhead for too many
overly ambitious VM configs?

Thanks,
drew

From: Marc Zyngier
Subject: Re: [Qemu-devel] VCPU hotplug on KVM/ARM
Date: Wed, 25 Jul 2018 14:07:12 +0100
To: Andrew Jones

On 25/07/18 13:28, Andrew Jones wrote:
> [...]
> I think the upfront allocation, allocating all possible cpus but only
> activating all present cpus, was the planned approach. What were the
> concerns about that approach? Just vcpu memory overhead for too many
> overly ambitious VM configs?

I don't have any ARM-specific concern about that, and I think this is
the right approach. It has the good property of not requiring much
change in the kernel (other than actually supporting CPU hotplug).

vcpu memory overhead is a generic concern though, and not only for ARM.
We currently allow up to 512 vcpus per VM, which looks like a lot, but
really isn't. If we're to allow this to be bumped up significantly, we
should start accounting the vcpu-related memory against the user's
allowance...

Thanks,

	M.
--
Jazz is not dead. It just smells funny...

From: Maran Wilson
Subject: Re: [Qemu-devel] VCPU hotplug on KVM/ARM
Date: Wed, 25 Jul 2018 10:26:05 -0700
To: Marc Zyngier, Andrew Jones, imammedo@redhat.com,
    kvmarm@lists.cs.columbia.edu

Thanks everyone.
It sounds like there is consensus around how best to proceed (at a high
level at least). Since Igor has already gotten started, I'll coordinate
with him offline to see where I can jump in.

Thanks,
-Maran

On 7/25/2018 6:07 AM, Marc Zyngier wrote:
> [...]
> I don't have any ARM-specific concern about that, and I think this is
> the right approach. It has the good property of not requiring much
> change in the kernel (other than actually supporting CPU hotplug).
>
> vcpu memory overhead is a generic concern though, and not only for ARM.
> [...]

From: Igor Mammedov
Subject: Re: [Qemu-devel] VCPU hotplug on KVM/ARM
Date: Tue, 31 Jul 2018 12:27:10 +0200
To: Marc Zyngier
Cc: cohuck@redhat.com, borntraeger@de.ibm.com, David Gibson,
    qemu-devel@nongnu.org, kvmarm@lists.cs.columbia.edu
On Wed, 25 Jul 2018 14:07:12 +0100, Marc Zyngier wrote:
> On 25/07/18 13:28, Andrew Jones wrote:
> > [...]
> > I think the upfront allocation, allocating all possible cpus but only
> > activating all present cpus, was the planned approach. What were the
> > concerns about that approach? Just vcpu memory overhead for too many
> > overly ambitious VM configs?
>
> I don't have any ARM-specific concern about that, and I think this is
> the right approach. It has the good property of not requiring much
> change in the kernel (other than actually supporting CPU hotplug).

For x86 we allocate VCPUs dynamically (both QEMU and KVM). CC'ing
ppc/s390 folks, as I don't recall how it's implemented there.

But we do not delete vcpus in KVM after they have been created (that
was deemed too complicated); we just delete the QEMU part and keep
KVM's vcpu for reuse with a future hotplug.

> vcpu memory overhead is a generic concern though, and not only for ARM.
> We currently allow up to 512 vcpus per VM, which looks like a lot, but
> really isn't. If we're to allow this to be bumped up significantly, we
> should start accounting the vcpu-related memory against the user's
> allowance...
From: David Hildenbrand
Subject: Re: [Qemu-devel] VCPU hotplug on KVM/ARM
Date: Tue, 31 Jul 2018 12:57:06 +0200
To: Igor Mammedov, Marc Zyngier

On 31.07.2018 12:27, Igor Mammedov wrote:
> [...]
> For x86 we allocate VCPUs dynamically (both QEMU and KVM). CC'ing
> ppc/s390 folks, as I don't recall how it's implemented there.

s390x: also handled that way, dynamically allocated. Unplug is not
supported by the architecture and is fenced off.

I remember a discussion where people said that dynamically
creating/deleting VCPUs (and therefore threads) is preferred, because
then there is no way on earth a malicious guest could make use of such
a CPU (in case there were a subtle BUG in QEMU).

--
Thanks,

David / dhildenb

From: Bharata B Rao
Subject: Re: VCPU hotplug on KVM/ARM
Date: Wed, 1 Aug 2018 13:39:24 +0530
To: Igor Mammedov

On Tue, Jul 31, 2018 at 3:57 PM, Igor Mammedov wrote:
> [...]
> But we do not delete vcpus in KVM after they have been created (that
> was deemed too complicated); we just delete the QEMU part and keep
> KVM's vcpu for reuse with a future hotplug.

Same with PPC: we too dynamically create vcpus, and during unplug we
keep KVM's vcpus for reuse.

Regards,
Bharata.