From: Ralf Spenneberg <software@opensource-security.de>
To: kvm@vger.kernel.org
Subject: Re: KVM and NUMA
Date: Fri, 16 Jul 2010 08:35:26 +0200 [thread overview]
Message-ID: <1279262126.2221.12.camel@localhost> (raw)
In-Reply-To: <20100715193124.GA24837@redhat.com>
Hi Daniel,
thanks for your response.
On Thursday, 2010-07-15 at 20:31 +0100, Daniel P. Berrange wrote:
> If numactl --hardware works, then libvirt should work,
> since libvirt uses the numactl library to query topology
Ok. I did not know that, and in my case it does not seem to work. See
below.
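For reference, the two views can be compared like this (assuming the numactl and libvirt client tools are installed on the host):

    numactl --hardware    # the kernel's view of the NUMA topology
    virsh capabilities    # libvirt's view of the host; look for the topology data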
> The NUMA topology does not get put inside the <cpu> element. It
> is one level up in a <topology> element. eg
>
In my case (Ubuntu 10.04 LTS) it is just put inside the cpu element.
Full host listing:
<capabilities>
  <host>
    <cpu>
      <arch>x86_64</arch>
      <model>core2duo</model>
      <topology sockets='2' cores='4' threads='1'/>
      <feature name='lahf_lm'/>
      <feature name='rdtscp'/>
      <feature name='popcnt'/>
      <feature name='dca'/>
      <feature name='xtpr'/>
      <feature name='cx16'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='acpi'/>
      <feature name='ds'/>
    </cpu>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
    <secmodel>
      <model>apparmor</model>
      <doi>0</doi>
    </secmodel>
  </host>
</capabilities>
> > I guess this is the fact, because QEMU does not recognize the
> > NUMA-Architecture (QEMU-Monitor):
> > (qemu) info numa
> > 0 nodes
Thanks for the clarification.
> There are two aspects to NUMA: 1. placing QEMU on appropriate NUMA
> nodes; 2. defining guest NUMA topology.
Right. I am interested in placing QEMU on the appropriate node.
>
> By default QEMU will float freely across any CPUs and all the guest
> RAM will appear in one node. This can be bad for performance,
> especially if you are benchmarking.
> So for performance testing you definitely want to bind QEMU to the
> CPUs within a single NUMA node at startup; this ensures that all
> memory accesses are local to the node, unless you give the guest
> more virtual RAM than there is free RAM on the local NUMA node.
> Since you suggest you're using libvirt, the low-level way to do
> this is in the guest XML at the <vcpu> element.
OK. But will my QEMU instance use the appropriate RAM, given that it
does not recognize the architecture?
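If I understand the <vcpu> element correctly, the pinning would look roughly like this (a sketch, not my actual config; the cpuset value has to name the physical CPUs of one NUMA node on the host, e.g. as reported by numactl --hardware):

    <domain type='kvm'>
      ...
      <vcpu cpuset='0-3'>4</vcpu>
      ...
    </domain>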
> For further performance you also really want to enable hugepages on
> your host (e.g. mount hugetlbfs at /dev/hugepages), then restart
> the libvirtd daemon, and then add the following to your guest XML
> just after the <memory> element:
>
> <memoryBacking>
>   <hugepages/>
> </memoryBacking>
I have played with that, too. I could mount the hugetlbfs filesystem and
define the mountpoint in libvirt. The guest started OK, but I could not
verify that the huge pages were actually used: /proc/meminfo always
showed 100% free huge pages, whether the guest was running or not.
Shouldn't these pages be used while the guest is running?
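For the record, this is what I am checking on the host (assuming a Linux /proc/meminfo; HugePages_Free should drop below HugePages_Total while the guest runs, but here it never moves):

```shell
# Host-side check: if the guest memory is really backed by hugetlbfs,
# HugePages_Free drops below HugePages_Total while the guest is running.
grep -E '^HugePages_(Total|Free|Rsvd)' /proc/meminfo
```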
As I said: Ubuntu, not RHEL.
Kind regards,
Ralf
Thread overview: 3+ messages
2010-07-15 17:10 KVM and NUMA Ralf Spenneberg
2010-07-15 19:31 ` Daniel P. Berrange
2010-07-16 6:35 ` Ralf Spenneberg [this message]