From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ralf Spenneberg
Subject: Re: KVM and NUMA
Date: Fri, 16 Jul 2010 08:35:26 +0200
Message-ID: <1279262126.2221.12.camel@localhost>
References: <1279213835.21655.77.camel@localhost>
 <20100715193124.GA24837@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="ISO-8859-15"
Content-Transfer-Encoding: 7bit
To: kvm@vger.kernel.org
Return-path:
Received: from mail2.spenneberg.net ([87.106.54.221]:40131 "EHLO
 mail2.spenneberg.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with
 ESMTP id S935636Ab0GPGfb (ORCPT ); Fri, 16 Jul 2010 02:35:31 -0400
Received: from [192.168.255.105] (p4FC44289.dip.t-dialin.net [79.196.66.137])
 (Authenticated sender: ralf@mail2.spenneberg.net) by mail2.spenneberg.net
 (Postfix) with ESMTPA id 12E2540C146 for ; Fri, 16 Jul 2010 06:35:27 +0000 (UTC)
In-Reply-To: <20100715193124.GA24837@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

Hi Daniel,

thanks for your response.

On Thursday, 15.07.2010 at 20:31 +0100, Daniel P. Berrange wrote:
> If numactl --hardware works, then libvirt should work,
> since libvirt uses the numactl library to query topology

Ok. I did not know that, and in my case it does not seem to work. See
below.

> The NUMA topology does not get put inside the <cpu> element. It
> is one level up in a <topology> element. eg

In my case (Ubuntu 10.04 LTS) it is just put inside the <cpu> element.
Full host listing (the XML markup was stripped by the archive; the
surviving values are):

    arch:          x86_64
    cpu model:     core2duo
    uri_transport: tcp
    secmodel:      apparmor (doi 0)

> > I guess this is the case, because QEMU does not recognize the
> > NUMA architecture (QEMU monitor):
> > (qemu) info numa
> > 0 nodes

Thanks for the clarification.

> There are two aspects to NUMA. 1. Placing QEMU on appropriate NUMA
> nodes. 2. defining guest NUMA topology

Right. I am interested in placing Qemu on the appropriate node.

> By default QEMU will float freely across any CPUs and all the guest
> RAM will appear in one node.
> This can be bad for performance, especially if you are benchmarking.
> So for performance testing you definitely want to bind QEMU to the
> CPUs within a single NUMA node at startup; this will mean that all
> memory accesses are local to the node, unless you give the guest
> more virtual RAM than there is free RAM on the local NUMA node.
> Since you suggest you're using libvirt, the low level way to do
> this is in the guest XML at the <vcpu cpuset=...> element.

Ok. But will my Qemu implementation use the appropriate RAM, since it
does not recognize the architecture?

> For further performance you also really want to enable hugepages on
> your host (eg mount hugetlbfs at /dev/hugepages), then restart the
> libvirtd daemon, and then add the following to your guest XML just
> after the <currentMemory> element:
>
>   <memoryBacking>
>     <hugepages/>
>   </memoryBacking>

I have played with that, too. I could mount the hugetlbfs filesystem
and define the mountpoint in libvirt. The guest started ok, but I could
not verify that it was actually used: /proc/meminfo always showed 100%
free huge pages, whether the guest was running or not. Shouldn't these
pages be used while the guest is running? As I said: Ubuntu, not RHEL.

Kind regards,

Ralf
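For readers of the archive: the CPU pinning Daniel describes can be
sketched as a guest XML fragment like the following. This is a sketch,
not Ralf's actual configuration; the guest name, memory size, vcpu
count and the cpuset range are hypothetical, assuming NUMA node 0 holds
host CPUs 0-3.

```xml
<domain type='kvm'>
  <name>numa-test</name>
  <!-- memory sizes in KiB: 2 GiB, hypothetical -->
  <memory>2097152</memory>
  <currentMemory>2097152</currentMemory>
  <!-- pin the guest's 2 vcpus to host CPUs 0-3, i.e. one NUMA node -->
  <vcpu cpuset='0-3'>2</vcpu>
  <!-- ... remainder of the domain definition ... -->
</domain>
```

With this in place, all of the guest's vcpu threads are confined to the
given cpuset, so their memory allocations tend to come from that node's
local RAM.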
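On the hugepages question: one way to check whether a running guest is
actually backed by huge pages is to compare the kernel's huge page
counters before and after starting it. A minimal sketch (assumes a
Linux host with hugepages reserved, e.g. via /proc/sys/vm/nr_hugepages):

```shell
#!/bin/sh
# Read the kernel's huge page counters from /proc/meminfo.
# While a hugepage-backed guest runs, HugePages_Free should be
# visibly lower than HugePages_Total; if it never drops, the guest
# is not using the hugetlbfs-backed memory.
total=$(awk '/HugePages_Total/ {print $2}' /proc/meminfo)
free=$(awk '/HugePages_Free/ {print $2}' /proc/meminfo)
echo "huge pages in use: $((total - free)) of $total"
```

Run it once before starting the guest and once after; identical numbers
(as Ralf observes) mean the hugepage backing is not taking effect.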