From: "Radim Krčmář" <rkrcmar@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Christoffer Dall <cdall@linaro.org>,
	Marc Zyngier <marc.zyngier@arm.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Cornelia Huck <cornelia.huck@de.ibm.com>,
	James Hogan <james.hogan@imgtec.com>,
	Paul Mackerras <paulus@ozlabs.org>,
	Alexander Graf <agraf@suse.com>
Subject: [PATCH 2/4] KVM: allocate kvm->vcpus separately
Date: Thu, 13 Apr 2017 22:19:49 +0200	[thread overview]
Message-ID: <20170413201951.11939-3-rkrcmar@redhat.com> (raw)
In-Reply-To: <20170413201951.11939-1-rkrcmar@redhat.com>

The maximum number of VCPUs is going to be high, but most VMs are still
going to use just a few.  We want to save memory, and there are two main
conservative possibilities:
  1) turn vcpus into a pointer and allocate it separately
  2) turn vcpus into a variable-length array at the end of struct kvm

This patch does (1) as it is slightly safer; (2) would avoid one level
of indirection and is a nice follow-up.

The vcpus array is going to be dynamic and might span several pages,
which is why it is allocated with kvm_kvzalloc().

Generic users of KVM_MAX_VCPUS are switched to kvm->max_vcpus as the
array size is going to change.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
---
 include/linux/kvm_host.h |  8 ++++++--
 virt/kvm/kvm_main.c      | 23 +++++++++++++++++++----
 2 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ae4e114cb7d1..6ba7bc831094 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -377,16 +377,20 @@ struct kvm {
 	struct kvm_memslots *memslots[KVM_ADDRESS_SPACE_NUM];
 	struct srcu_struct srcu;
 	struct srcu_struct irq_srcu;
-	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
+	struct kvm_vcpu **vcpus;
 
 	/*
 	 * created_vcpus is protected by kvm->lock, and is incremented
 	 * at the beginning of KVM_CREATE_VCPU.  online_vcpus is only
 	 * incremented after storing the kvm_vcpu pointer in vcpus,
 	 * and is accessed atomically.
+	 * max_vcpus is the size of vcpus array and can be changed only before
+	 * any vcpu is created.  Updates to max_vcpus are protected by
+	 * kvm->lock.
 	 */
 	atomic_t online_vcpus;
 	int created_vcpus;
+	int max_vcpus;
 	int last_boosted_vcpu;
 	struct list_head vm_list;
 	struct mutex lock;
@@ -480,7 +484,7 @@ static inline struct kvm_vcpu *kvm_get_vcpu_by_id(struct kvm *kvm, int id)
 
 	if (id < 0)
 		return NULL;
-	if (id < KVM_MAX_VCPUS)
+	if (id < kvm->max_vcpus)
 		vcpu = kvm_get_vcpu(kvm, id);
 	if (vcpu && vcpu->vcpu_id == id)
 		return vcpu;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f03b093abffe..0f1579f118b4 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -604,20 +604,35 @@ static int kvm_create_vm_debugfs(struct kvm *kvm, int fd)
 	return 0;
 }
 
-static inline struct kvm *kvm_alloc_vm(void)
+static inline struct kvm *kvm_alloc_vm(size_t max_vcpus)
 {
-	return kzalloc(sizeof(struct kvm), GFP_KERNEL);
+	struct kvm *kvm;
+
+	kvm = kzalloc(sizeof(struct kvm), GFP_KERNEL);
+	if (!kvm)
+		return NULL;
+
+	kvm->vcpus = kvm_kvzalloc(sizeof(*kvm->vcpus) * max_vcpus);
+	if (!kvm->vcpus) {
+		kfree(kvm);
+		return NULL;
+	}
+	kvm->max_vcpus = max_vcpus;
+
+	return kvm;
 }
 
 static inline void kvm_free_vm(struct kvm *kvm)
 {
+	if (kvm)
+		kvfree(kvm->vcpus);
 	kfree(kvm);
 }
 
 static struct kvm *kvm_create_vm(unsigned long type)
 {
 	int r, i;
-	struct kvm *kvm = kvm_alloc_vm();
+	struct kvm *kvm = kvm_alloc_vm(KVM_MAX_VCPUS);
 
 	if (!kvm)
 		return ERR_PTR(-ENOMEM);
@@ -2445,7 +2460,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, u32 id)
 		return -EINVAL;
 
 	mutex_lock(&kvm->lock);
-	if (kvm->created_vcpus == KVM_MAX_VCPUS) {
+	if (kvm->created_vcpus == kvm->max_vcpus) {
 		mutex_unlock(&kvm->lock);
 		return -EINVAL;
 	}
-- 
2.12.0

Thread overview: 16+ messages
2017-04-13 20:19 [PATCH 0/4] KVM: add KVM_CREATE_VM2 to allow dynamic kvm->vcpus array Radim Krčmář
2017-04-13 20:19 ` [PATCH 1/4] KVM: remove unused __KVM_HAVE_ARCH_VM_ALLOC Radim Krčmář
2017-04-18 10:50   ` David Hildenbrand
2017-04-13 20:19 ` Radim Krčmář [this message]
2017-04-13 20:19 ` [PATCH 3/4] KVM: add KVM_CREATE_VM2 system ioctl Radim Krčmář
2017-04-18 14:16   ` Paolo Bonzini
2017-04-18 14:30     ` Paolo Bonzini
2017-04-24 16:22       ` Radim Krčmář
2017-04-24 20:22         ` Radim Krčmář
2017-04-13 20:19 ` [PATCH 4/4] KVM: x86: enable configurable MAX_VCPU Radim Krčmář
2017-04-19  8:08   ` Christian Borntraeger
2017-04-24 17:00     ` Radim Krčmář
2017-04-18 11:11 ` [PATCH 0/4] KVM: add KVM_CREATE_VM2 to allow dynamic kvm->vcpus array David Hildenbrand
2017-04-18 12:29   ` Cornelia Huck
2017-04-24 20:03     ` Radim Krčmář
2017-04-24 17:03   ` Radim Krčmář
