From: Christian Borntraeger <borntraeger@de.ibm.com>
To: Alexander Graf <agraf@suse.de>, Thomas Huth <thuth@redhat.com>,
	kvm-ppc@vger.kernel.org, qemu-ppc@nongnu.org,
	kvm@vger.kernel.org
Cc: David Gibson <dgibson@redhat.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Bharata B Rao <bharata@linux.vnet.ibm.com>
Subject: Re: [Qemu-ppc] KVM memory slots limit on powerpc
Date: Fri, 04 Sep 2015 12:07:39 +0200
Message-ID: <55E96D6B.2070201@de.ibm.com>
In-Reply-To: <55E96CB9.4070100@suse.de>

On 04.09.2015 12:04, Alexander Graf wrote:
> 
> 
> On 04.09.15 11:59, Christian Borntraeger wrote:
>> On 04.09.2015 11:35, Thomas Huth wrote:
>>>
>>>  Hi all,
>>>
>>> now that we are getting memory hotplug for the spapr machine in
>>> qemu-ppc, too, it seems we can easily hit the limit on KVM-internal
>>> memory slots ("#define KVM_USER_MEM_SLOTS 32" in
>>> arch/powerpc/include/asm/kvm_host.h). For example, start
>>> qemu-system-ppc64 with a couple of "-device secondary-vga" and "-m
>>> 4G,slots=32,maxmem=40G" and then try to hot-plug all 32 DIMMs ...
>>> you'll see that it aborts well before getting through them.
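
(To make the reproducer concrete, with example names and sizes only:
each DIMM is hot-plugged from the QEMU monitor with something like

   (qemu) object_add memory-backend-ram,id=mem1,size=1G
   (qemu) device_add pc-dimm,id=dimm1,memdev=mem1

and every such DIMM takes one KVM memory slot, on top of the slots
already used for boot RAM and for RAM-backed PCI BARs such as the VRAM
of each secondary-vga device, which is presumably why the limit is hit
before all 32 DIMMs are in.)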
>>>
>>> The x86 code has already increased KVM_USER_MEM_SLOTS to 509
>>> (+3 internal slots = 512) ... maybe we should now increase the
>>> number of slots on powerpc, too? Since we don't use internal slots
>>> on POWER, would 512 be a good value, or would fewer be sufficient?
>>
>> While you are at it, I guess the s390 value should also be increased.
> 
> That constant defines the size of the memslot array in struct kvm,
> which in turn gets allocated by kzalloc, so it's pinned kernel memory
> that has to be physically contiguous. Big allocations like that can
> become a problem at runtime.
> 
> So maybe there is another way? Can we extend the memslot array size
> dynamically somehow? Allocate it separately? How much memory does the
> memslot array use up with 512 entries?
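
(Back-of-the-envelope, assuming an entry size of about 64 bytes for
struct kvm_memory_slot, which is a guess since the real size depends on
the architecture and its arch-specific data: 512 entries would come to
roughly 512 * 64 = 32 KiB of pinned, physically contiguous memory per
VM.)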

Maybe an RCU-protected scheme that doubles the number of memslots on
each overflow? Yes, that would be good and would even reduce the
footprint for systems with only a small number of memslots.
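
Something along these lines, as a very rough sketch (made-up names and
structures only, not the actual KVM memslot code, which also has
per-arch slot data, dirty bitmaps and its own synchronization to take
care of; a real version would probably also want a kmalloc/vmalloc
fallback for the larger arrays instead of plain kzalloc):

  /*
   * Hypothetical sketch only, NOT the actual KVM code.  All names
   * (struct my_slot, struct my_memslots, add_slot, ...) are made up.
   * Idea: keep the memslot array behind an RCU-protected pointer,
   * replace it wholesale on every update, and double its capacity
   * whenever it fills up.
   */
  #include <linux/rcupdate.h>
  #include <linux/slab.h>
  #include <linux/string.h>
  #include <linux/types.h>

  struct my_slot {                        /* stand-in for struct kvm_memory_slot */
          unsigned long base_gfn;
          unsigned long npages;
  };

  struct my_memslots {
          int capacity;                   /* allocated entries */
          int used;                       /* entries in use */
          struct my_slot slots[];         /* flexible array, sized at alloc time */
  };

  /* Reader side: walk the current array under rcu_read_lock(). */
  static bool gfn_is_mapped(struct my_memslots __rcu **slotsp, unsigned long gfn)
  {
          struct my_memslots *s;
          bool found = false;
          int i;

          rcu_read_lock();
          s = rcu_dereference(*slotsp);
          for (i = 0; s && i < s->used; i++) {
                  if (gfn >= s->slots[i].base_gfn &&
                      gfn <  s->slots[i].base_gfn + s->slots[i].npages) {
                          found = true;
                          break;
                  }
          }
          rcu_read_unlock();
          return found;
  }

  /*
   * Writer side; callers are assumed to be serialized by a mutex.
   * Build a new array (doubled if the old one is full), copy the old
   * entries, append the new one, publish, wait for old readers, free.
   */
  static int add_slot(struct my_memslots __rcu **slotsp,
                      unsigned long base_gfn, unsigned long npages)
  {
          struct my_memslots *old = rcu_dereference_protected(*slotsp, true);
          int cap = old ? old->capacity : 8;
          struct my_memslots *new;

          if (old && old->used == old->capacity)
                  cap *= 2;

          new = kzalloc(sizeof(*new) + cap * sizeof(new->slots[0]), GFP_KERNEL);
          if (!new)
                  return -ENOMEM;
          new->capacity = cap;
          if (old) {
                  new->used = old->used;
                  memcpy(new->slots, old->slots,
                         old->used * sizeof(old->slots[0]));
          }
          new->slots[new->used].base_gfn = base_gfn;
          new->slots[new->used].npages   = npages;
          new->used++;

          rcu_assign_pointer(*slotsp, new);
          synchronize_rcu();              /* no reader can still see 'old' now */
          kfree(old);
          return 0;
  }

The doubling keeps the common case (a handful of slots) at a tiny
allocation while still allowing hundreds of slots for memory hotplug;
the cost is one copy plus one grace period per slot update, which
should not matter since memslot updates are rare.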

Christian



Thread overview: 12+ messages
2015-09-04  9:35 KVM memory slots limit on powerpc Thomas Huth
2015-09-04  9:59 ` Christian Borntraeger
2015-09-04 10:04   ` [Qemu-ppc] " Alexander Graf
2015-09-04 10:07     ` Christian Borntraeger [this message]
2015-09-04 10:28       ` Thomas Huth
2015-09-04 10:40         ` Benjamin Herrenschmidt
2015-09-04 14:22         ` Alex Williamson
2015-09-04 14:45     ` Thomas Huth
2015-09-07 14:31     ` Igor Mammedov
2015-09-08  6:05       ` Thomas Huth
2015-09-08  7:11         ` Christian Borntraeger
2015-09-08  9:22           ` Thomas Huth
