From: David Gibson <dgibson@redhat.com>
To: Peter Krempa <pkrempa@redhat.com>
Cc: qemu-devel@nongnu.org, imammedo@redhat.com
Subject: Re: [Qemu-devel] [RFC PATCH 2/2] numa: Add node_id data in query-hotpluggable-cpus
Date: Tue, 12 Jul 2016 13:27:41 +1000
Message-ID: <20160712132741.59840413@voom.fritz.box>
In-Reply-To: <20160708074600.GB78006@andariel.pipo.sk>
On Fri, 8 Jul 2016 09:46:00 +0200
Peter Krempa <pkrempa@redhat.com> wrote:
> On Fri, Jul 08, 2016 at 12:23:08 +1000, David Gibson wrote:
> > On Thu, 7 Jul 2016 17:17:14 +0200
> > Peter Krempa <pkrempa@redhat.com> wrote:
> >
> > > Add a helper that looks up the NUMA node for a given CPU and use it to
> > > fill the node_id in the PPC and X86 impls of query-hotpluggable-cpus.
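
(For context, a rough sketch of what such a lookup helper could look like -
the numa_info[]/node_cpu/nb_numa_nodes names below are assumptions based on
QEMU's NUMA code of the time, not the actual patch:)

#include "sysemu/numa.h"   /* nb_numa_nodes, numa_info[] (assumed layout) */
#include "qemu/bitops.h"   /* test_bit() */

/* Walk the per-node CPU bitmaps that -numa ...,cpus=... filled in and
 * return the node the given CPU index belongs to; a return value of
 * nb_numa_nodes means the CPU is not in any node's cpus= list. */
int numa_get_node_for_cpu(int idx)
{
    int i;

    for (i = 0; i < nb_numa_nodes; i++) {
        if (test_bit(idx, numa_info[i].node_cpu)) {
            break;
        }
    }
    return i;
}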
> >
> >
> > IIUC how the query thing works this means that the node id issued by
> > query-hotpluggable-cpus will be echoed back to device_add by libvirt.
>
> It will be echoed back, but the problem is that it's configured
> separately ...
>
> > I'm not sure we actually process that information in the core at
> > present, so I don't know that that's right.
> >
> > We need to be clear on which direction information is flowing here.
> > Does query-hotpluggable-cpus *define* the NUMA node allocation, which
> > is then passed to the core device that implements it? Or is the NUMA
> > allocation defined elsewhere, with query-hotpluggable-cpus just
> > reporting it?
>
> Currently (in the pre-hotplug era) the NUMA topology is defined by
> specifying CPU numbers (see previous patch) on the command line:
>
> -numa node,nodeid=1,cpus=1-5,cpus=8,cpus=11...
>
> This is then reported to the guest.
>
> For a machine started with:
>
> -smp 5,maxcpus=8,sockets=2,cores=2,threads=2
> -numa node,nodeid=0,cpus=0,cpus=2,cpus=4,cpus=6,mem=500
> -numa node,nodeid=1,cpus=1,cpus=3,cpus=5,cpus=7,mem=500
>
> you get the following topology, which is not really possible with real
> hardware:
>
> # lscpu
> Architecture:          x86_64
> CPU op-mode(s):        32-bit, 64-bit
> Byte Order:            Little Endian
> CPU(s):                5
> On-line CPU(s) list:   0-4
> Thread(s) per core:    1
> Core(s) per socket:    2
> Socket(s):             2
> NUMA node(s):          2
> Vendor ID:             GenuineIntel
> CPU family:            6
> Model:                 6
> Model name:            QEMU Virtual CPU version 2.5+
> Stepping:              3
> CPU MHz:               3504.318
> BogoMIPS:              7008.63
> Hypervisor vendor:     KVM
> Virtualization type:   full
> L1d cache:             32K
> L1i cache:             32K
> L2 cache:              4096K
> NUMA node0 CPU(s):     0,2,4
> NUMA node1 CPU(s):     1,3
>
> Note that the number of CPUs does not need to be identical across NUMA nodes.
>
> Given the above, 'query-hotpluggable-cpus' will need to report the data
> that was configured even if it doesn't make much sense in the real
> world.
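
(For illustration only, the reported data might then look roughly like this
for the machine above - the field names are a sketch based on this RFC, not
a finalised QMP schema:)

-> { "execute": "query-hotpluggable-cpus" }
<- { "return": [
       { "type": "qemu64-x86_64-cpu", "vcpus-count": 1,
         "props": { "socket-id": 1, "core-id": 0, "thread-id": 1,
                    "node-id": 1 } },
       { "type": "qemu64-x86_64-cpu", "vcpus-count": 1,
         "qom-path": "/machine/unattached/device[0]",
         "props": { "socket-id": 0, "core-id": 0, "thread-id": 0,
                    "node-id": 0 } }
     ] }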
>
> I did not try the above on a PPC host and thus I'm not sure whether the
> config above is allowed or not.
It's not - although I'm not sure that we actually have anything
enforcing this.
However, all threads of a single core *must* be in the same NUMA node -
there's no way of reporting anything finer grained to the guest.
> While for hotplugged CPUs it would be possible to plug them into the
> correct node using the queried data, with the current approach it's
> impossible to set up the initial vCPUs differently.
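
(Again purely as a sketch, echoing those queried props back for a hotplug
would look something like the following - the driver name and device id are
made up for the example:)

-> { "execute": "device_add",
     "arguments": { "driver": "qemu64-x86_64-cpu", "id": "cpu5",
                    "socket-id": 1, "core-id": 0, "thread-id": 1,
                    "node-id": 1 } }
<- { "return": {} }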
>
> Peter
>
> Note: for libvirt it's a no-go to start a throwaway qemu process just to
> query this information, so it's desirable to have a way to configure
> all of this without needing to query a specific machine
> type/topology.
--
David Gibson <dgibson@redhat.com>
Senior Software Engineer, Virtualization, Red Hat