From: Anthony Liguori <anthony@codemonkey.ws>
To: Andre Przywara <andre.przywara@amd.com>
Cc: qemu-devel@nongnu.org, Avi Kivity <avi@redhat.com>
Subject: [Qemu-devel] Re: [PATCH 7/8] v2: add numa monitor command
Date: Tue, 16 Dec 2008 15:21:38 -0600 [thread overview]
Message-ID: <49481BE2.8080205@codemonkey.ws> (raw)
In-Reply-To: <4947B91C.4050403@amd.com>
Andre Przywara wrote:
> Signed-off-by: Andre Przywara <andre.przywara@amd.com>
>
> # HG changeset patch
> # User Andre Przywara <andre.przywara@amd.com>
> # Date 1229435568 -3600
> # Node ID 4f0a8ac2d88ffffc1dcd82785c1620553baa86da
> # Parent 5a74bd76931b79713803795b3943aa8383946521
> add -numa monitor command to repin guest nodes
>
> diff -r 5a74bd76931b -r 4f0a8ac2d88f monitor.c
> --- a/monitor.c Tue Dec 16 14:51:24 2008 +0100
> +++ b/monitor.c Tue Dec 16 14:52:48 2008 +0100
> @@ -39,6 +39,9 @@
> #include "qemu-timer.h"
> #include "migration.h"
> #include "kvm.h"
> +#ifdef CONFIG_NUMA
> +#include <numa.h>
> +#endif
>
> //#define DEBUG
> //#define DEBUG_COMPLETION
> @@ -1285,6 +1288,9 @@ static void do_info_numa(void)
> {
> int i, j;
>
> +#ifndef CONFIG_NUMA
> + term_printf("NUMA placement support: not compiled\n");
> +#endif
> term_printf("%d nodes\n", numnumanodes);
> for (i = 0; i < numnumanodes; i++) {
> term_printf("node %d cpus:", i);
> @@ -1292,7 +1298,40 @@ static void do_info_numa(void)
> if (node_to_cpus[i] & (1ULL << j)) term_printf(" %d", j);
> term_printf("\n");
> term_printf("node %d size: %" PRId64 " MB\n", i, node_mem[i] >> 20);
> - }
> + term_printf("node %d host: ", i);
> + if (hostnodes[i] == (uint64_t)-1) term_printf("*\n"); else
> + term_printf("%" PRId64 "\n", hostnodes[i]);
> + }
> +}
> +
> +static void do_numa(const char *affinity)
> +{
> +#ifdef CONFIG_NUMA
> + uint64_t newaff[MAX_NODES];
> + int i;
> + unsigned long offset = 0;
> +
> + if (numnumanodes <= 0) term_printf("No NUMA nodes defined.\n"); else
> + if (numa_available() == -1) term_printf("Not a NUMA host.\n"); else
> + {
>
This should be reformatted.
> + for (i = 0; i < numnumanodes; i++) newaff[i] = hostnodes[i];
>
This should be a separate line.
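For what it's worth, the chained one-line "if (...) ...; else" checks could also be restructured with early returns. A minimal standalone sketch (term_printf and numa_available() are stubbed here for illustration; they are not the real QEMU/libnuma symbols):

```c
#include <stdio.h>

/* Stubs for illustration only -- stand-ins for QEMU's term_printf()
 * and libnuma's numa_available(). */
#define term_printf printf
static int numa_available(void) { return 0; }   /* pretend the host has NUMA */
static int numnumanodes = 2;                    /* pretend two guest nodes */

/* Early-return style: each failed precondition prints a message and
 * bails out. Returns 0 when the repinning logic may proceed. */
static int numa_preconditions(void)
{
    if (numnumanodes <= 0) {
        term_printf("No NUMA nodes defined.\n");
        return 1;
    }
    if (numa_available() == -1) {
        term_printf("Not a NUMA host.\n");
        return 2;
    }
    return 0;
}
```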
> + parse_numa_args(affinity, newaff, NULL, NULL, MAX_NODES, 0);
> + for (i = 0; i < numnumanodes; i++) {
> + if (newaff[i] != hostnodes[i]) {
> + hostnodes[i] = newaff[i];
> + if (hostnodes[i] == (uint64_t)-1)
>
I don't think using u64's for cpu=>node mappings is a reasonable thing
to do at this stage.  64-processor systems are relatively common these
days.
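A bitmap sized at runtime would avoid the 64-CPU ceiling. A hypothetical sketch (the cpu_map_* names are invented for illustration and are not QEMU or libnuma API):

```c
#include <stdlib.h>

/* A heap-allocated bitmap instead of a fixed uint64_t mask, so more
 * than 64 CPUs can be represented per node. */
typedef struct {
    unsigned long *bits;
    int ncpus;
} cpu_map_t;

#define BITS_PER_WORD (8 * sizeof(unsigned long))

static cpu_map_t *cpu_map_new(int ncpus)
{
    cpu_map_t *m = malloc(sizeof(*m));
    int nwords = (ncpus + BITS_PER_WORD - 1) / BITS_PER_WORD;

    m->ncpus = ncpus;
    m->bits = calloc(nwords, sizeof(unsigned long));
    return m;
}

static void cpu_map_set(cpu_map_t *m, int cpu)
{
    m->bits[cpu / BITS_PER_WORD] |= 1UL << (cpu % BITS_PER_WORD);
}

static int cpu_map_test(const cpu_map_t *m, int cpu)
{
    return (m->bits[cpu / BITS_PER_WORD] >> (cpu % BITS_PER_WORD)) & 1;
}
```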
> + numa_tonodemask_memory (phys_ram_base + offset,
> + node_mem[i], &numa_all_nodes);
> + else
> + numa_tonode_memory(phys_ram_base + offset,
> + node_mem[i], hostnodes[i] % (numa_max_node() + 1));
> + }
> + offset += node_mem[i];
> + }
> + }
> +#else
> + term_printf ("NUMA host affinity support not compiled in\n");
> +#endif
> }
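For reference, the loop above binds each guest node's contiguous RAM range [offset, offset + node_mem[i]) in turn. The offset arithmetic in isolation (plain C, no libnuma; node_offsets is a made-up helper name):

```c
#include <stdint.h>

/* Compute each node's start offset within the contiguous guest RAM
 * block, mirroring the "offset += node_mem[i]" walk in do_numa().
 * Node i occupies [offsets[i], offsets[i] + node_mem[i]). */
static void node_offsets(const uint64_t *node_mem, int nodes,
                         uint64_t *offsets)
{
    uint64_t offset = 0;
    int i;

    for (i = 0; i < nodes; i++) {
        offsets[i] = offset;
        offset += node_mem[i];
    }
}
```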
Regards,
Anthony Liguori
2008-12-16 14:20 [Qemu-devel] [PATCH 7/8] v2: add numa monitor command Andre Przywara
2008-12-16 21:21 ` Anthony Liguori