From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:39787)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1UpuzH-000184-JC for qemu-devel@nongnu.org;
	Fri, 21 Jun 2013 02:38:25 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1UpuzG-0004L2-KY for qemu-devel@nongnu.org;
	Fri, 21 Jun 2013 02:38:23 -0400
Received: from [222.73.24.84] (port=33122 helo=song.cn.fujitsu.com)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1UpuxQ-0000kp-FD for qemu-devel@nongnu.org;
	Fri, 21 Jun 2013 02:38:22 -0400
From: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Date: Fri, 21 Jun 2013 14:26:00 +0800
Message-Id: <1371795960-10478-10-git-send-email-gaowanlong@cn.fujitsu.com>
In-Reply-To: <1371795960-10478-1-git-send-email-gaowanlong@cn.fujitsu.com>
References: <1371795960-10478-1-git-send-email-gaowanlong@cn.fujitsu.com>
Subject: [Qemu-devel] [PATCH V2 9/9] NUMA: show host memory policy info in
	info numa command
To: qemu-devel@nongnu.org
Cc: aliguori@us.ibm.com, ehabkost@redhat.com, bsd@redhat.com,
	pbonzini@redhat.com, y-goto@jp.fujitsu.com, afaerber@suse.de,
	gaowanlong@cn.fujitsu.com

Show the host memory policy of nodes in the "info numa" monitor command.
After this patch, if host NUMA support is enabled, the monitor command
"info numa" shows output like the following:

    (qemu) info numa
    2 nodes
    node 0 cpus: 0
    node 0 size: 1024 MB
    node 0 mempolicy: membind=0,1
    node 1 cpus: 1
    node 1 size: 1024 MB
    node 1 mempolicy: interleave=1

Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
---
 monitor.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/monitor.c b/monitor.c
index 61dbebb..b6e93e5 100644
--- a/monitor.c
+++ b/monitor.c
@@ -74,6 +74,11 @@
 #endif
 #include "hw/lm32/lm32_pic.h"
 
+#ifdef CONFIG_NUMA
+#include <numa.h>
+#include <numaif.h>
+#endif
+
 //#define DEBUG
 //#define DEBUG_COMPLETION
 
@@ -1807,6 +1812,7 @@ static void do_info_numa(Monitor *mon, const QDict *qdict)
     int i;
     CPUArchState *env;
     CPUState *cpu;
+    unsigned long first, next;
 
     monitor_printf(mon, "%d nodes\n", nb_numa_nodes);
     for (i = 0; i < nb_numa_nodes; i++) {
@@ -1820,6 +1826,42 @@ static void do_info_numa(Monitor *mon, const QDict *qdict)
         monitor_printf(mon, "\n");
         monitor_printf(mon, "node %d size: %" PRId64 " MB\n", i,
                        numa_info[i].node_mem >> 20);
+
+#ifdef CONFIG_NUMA
+        monitor_printf(mon, "node %d mempolicy: ", i);
+        switch (numa_info[i].flags & NODE_HOST_POLICY_MASK) {
+        case NODE_HOST_BIND:
+            monitor_printf(mon, "membind=");
+            break;
+        case NODE_HOST_INTERLEAVE:
+            monitor_printf(mon, "interleave=");
+            break;
+        case NODE_HOST_PREFERRED:
+            monitor_printf(mon, "preferred=");
+            break;
+        default:
+            monitor_printf(mon, "default\n");
+            continue;
+        }
+
+        if (numa_info[i].flags & NODE_HOST_RELATIVE)
+            monitor_printf(mon, "+");
+
+        next = first = find_first_bit(numa_info[i].host_mem, MAX_CPUMASK_BITS);
+        monitor_printf(mon, "%lu", first);
+        do {
+            if (next == numa_max_node())
+                break;
+            next = find_next_bit(numa_info[i].host_mem, MAX_CPUMASK_BITS,
+                                 next + 1);
+            if (next > numa_max_node() || next == MAX_CPUMASK_BITS)
+                break;
+
+            monitor_printf(mon, ",%lu", next);
+        } while (true);
+
+        monitor_printf(mon, "\n");
+#endif
     }
 }
-- 
1.8.3.1.448.gfb7dfaa