From mboxrd@z Thu Jan 1 00:00:00 1970 From: "Jan Beulich" Subject: Ping: c/s 20526 (tools: avoid cpu over-commitment if numa=on) Date: Wed, 13 Jan 2010 08:15:08 +0000 Message-ID: <4B4D8F1C02000078000299EC@vpn.id2.novell.com> Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable Return-path: Content-Disposition: inline List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Sender: xen-devel-bounces@lists.xensource.com Errors-To: xen-devel-bounces@lists.xensource.com To: andre.przywara@amd.com Cc: xen-devel@lists.xensource.com List-Id: xen-devel@lists.xenproject.org Andre, I'm afraid this change isn't really correct: >+ cores_per_node =3D info['nr_cpus'] / info['nr_nodes'] >+ nodes_required =3D (self.info['VCPUs_max'] + cores_per_no= de - 1) / cores_per_node Simply using cores_per_node (as calculated here) as a divisor is bound to cause division-by-zero issues, namely when limiting the number of CPUs on the Xen command line (maxcpus=3D). I'm not sure though, what a reasonable solution to this might look like, since cores-per-node is a meaningless thing in an artificial setup like this, and may also be meaningless in asymmetric configurations. So perhaps we really need to iterate over nodes while summing up the number of CPUs they have until the number of needed vCPU-s was reached. Jan