From: Dulloor <dulloor@gmail.com>
To: Keir Fraser <keir.fraser@eu.citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
Jan Beulich <JBeulich@novell.com>
Subject: Re: [PATCH] libxc bitmap utils and vcpu-affinity
Date: Tue, 23 Mar 2010 12:40:18 -0400
Message-ID: <940bcfd21003230940w7c7d87eas7e1300cfdb633c4c@mail.gmail.com>
In-Reply-To: <C7CE50F7.E2C9%keir.fraser@eu.citrix.com>
[-- Attachment #1: Type: text/plain, Size: 1091 bytes --]
Fine, I agree with you both. Attached is a patch that adds xenctl_bitmap/xenctl_cpumap
utilities to libxc and uses them in vcpu_(get|set)affinity.
For the guest-NUMA interface, I will see whether I can use xenctl_cpumap as well.
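A rough usage sketch (not part of the patch) of how the pieces are meant to fit
together. It assumes the caller can include the private xc_cpumap.h header, and
show_affinity() is just an illustrative name:

/* Pin vcpu 0 of a domain to physical cpu 2, then read the affinity back. */
#include <stdio.h>
#include "xenctrl.h"
#include "xc_cpumap.h"

static int show_affinity(int xc_handle, uint32_t domid)
{
    struct xenctl_cpumap cpumap;
    int cpu;

    /* Allocate and zero a bitmap sized for the host's physical cpus. */
    if (xc_cpumap_allocz_bitmap(xc_handle, &cpumap))
        return -1;

    xc_cpumap_set_cpu(2, cpumap);          /* mark cpu 2 in the map */

    /* The affinity wrappers now take the xenctl_cpumap directly. */
    if (xc_vcpu_setaffinity(xc_handle, domid, 0, &cpumap) ||
        xc_vcpu_getaffinity(xc_handle, domid, 0, &cpumap))
        goto failed;

    xc_for_each_cpu(cpu, cpumap)           /* iterate over the set bits */
        printf("vcpu0 may run on cpu %d\n", cpu);

    xc_cpumap_free_bitmap(&cpumap);
    return 0;

failed:
    xc_cpumap_free_bitmap(&cpumap);
    return -1;
}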
-dulloor
On Tue, Mar 23, 2010 at 7:05 AM, Keir Fraser <keir.fraser@eu.citrix.com> wrote:
> On 23/03/2010 10:10, "Jan Beulich" <JBeulich@novell.com> wrote:
>
>>>>> Dulloor <dulloor@gmail.com> 22.03.10 18:44 >>>
>>> Motivation for using xenctl_cpumask in Xen interfaces :
>>> - xenctl_cpumap is just 4 bytes smaller than static xenctl_cpumask for
>>> 128 cpus (128 would be good for quite some time). However, the new
>>
>> I don't buy this (we're already building for 256 CPUs, looking forward
>> to further bump this in the not too distant future), and I'm generally
>> opposed to introducing hard coded limits in a public interface.
>
> We should use xenctl_cpumask everywhere for specifying physical CPU bitmaps,
> even into guest NUMA interfaces if appropriate. I don't really care if it is
> a bit harder to use than a static bitmap.
>
> -- Keir
>
>
>
[-- Attachment #2: cpumap-utils.patch --]
[-- Type: text/x-patch, Size: 29195 bytes --]
diff -r 04cb0829d138 tools/libxc/Makefile
--- a/tools/libxc/Makefile Wed Mar 17 14:10:43 2010 +0000
+++ b/tools/libxc/Makefile Tue Mar 23 12:29:26 2010 -0400
@@ -25,6 +25,7 @@
CTRL_SRCS-y += xc_mem_event.c
CTRL_SRCS-y += xc_mem_paging.c
CTRL_SRCS-y += xc_memshr.c
+CTRL_SRCS-y += xc_bitmap.c
CTRL_SRCS-$(CONFIG_X86) += xc_pagetab.c
CTRL_SRCS-$(CONFIG_Linux) += xc_linux.c
CTRL_SRCS-$(CONFIG_SunOS) += xc_solaris.c
diff -r 04cb0829d138 tools/libxc/xc_bitmap.c
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/tools/libxc/xc_bitmap.c Tue Mar 23 12:29:26 2010 -0400
@@ -0,0 +1,250 @@
+#include "xc_bitmap.h"
+#include <stdio.h>
+
+/*
+ * xc_bitmap_find_next_bit is adapted from the definition of the generic
+ * find_next_bit in Linux, with the following copyright:
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * Adapted for byte-based bitmap by Dulloor (dulloor@gatech.edu)
+ */
+
+/**
+ * __xc_ffs - find the first set bit in a byte.
+ * @byte: The byte to search
+ *
+ * The result is undefined if no bit is set, so callers should check for 0 first.
+ */
+static inline int __xc_ffs(uint8_t byte)
+{
+ int num = 0;
+
+ if ((byte & 0xff) == 0) {
+ num += 8;
+ byte >>= 8;
+ }
+ if ((byte & 0xf) == 0) {
+ num += 4;
+ byte >>= 4;
+ }
+ if ((byte & 0x3) == 0) {
+ num += 2;
+ byte >>= 2;
+ }
+ if ((byte & 0x1) == 0)
+ num += 1;
+ return num;
+}
+
+int
+xc_bitmap_find_next_bit( const uint8_t *addr, uint32_t size, uint32_t offset)
+{
+ const uint8_t *p;
+ uint32_t result;
+ uint8_t tmp;
+
+ if (offset >= size)
+ return size;
+
+ p = addr + XC_BITMAP_BYTE(offset);
+ result = offset & ~(XC_BITS_PER_BYTE-1);
+
+ size -= result;
+ offset %= XC_BITS_PER_BYTE;
+ if (offset) {
+ tmp = *(p++);
+ tmp &= (0xff << offset);
+ if (size < XC_BITS_PER_BYTE)
+ goto found_first;
+ if (tmp)
+ goto found_middle;
+ size -= XC_BITS_PER_BYTE;
+ result += XC_BITS_PER_BYTE;
+ }
+ while (size & ~(XC_BITS_PER_BYTE-1)) {
+ if ((tmp = *(p++)))
+ goto found_middle;
+ result += XC_BITS_PER_BYTE;
+ size -= XC_BITS_PER_BYTE;
+ }
+ if (!size)
+ return result;
+ tmp = *p;
+
+found_first:
+ tmp &= (0xff >> (XC_BITS_PER_BYTE - size));
+ if (!tmp)
+ return result+size;
+found_middle:
+ return result + __xc_ffs(tmp);
+}
+
+void __xc_bitmap_and(uint8_t *dp, uint8_t *s1p, uint8_t *s2p, int nbits)
+{
+ int k;
+ int nr = XC_BITS_TO_BYTES(nbits);
+
+ for (k=0; k<nr; k++)
+ dp[k] = s1p[k] & s2p[k];
+}
+
+void __xc_bitmap_or(uint8_t *dp, uint8_t *s1p, uint8_t *s2p, int nbits)
+{
+ int k;
+ int nr = XC_BITS_TO_BYTES(nbits);
+
+ for (k=0; k<nr; k++)
+ dp[k] = s1p[k] | s2p[k];
+}
+
+void __xc_bitmap_xor(uint8_t *dp, uint8_t *s1p, uint8_t *s2p, int nbits)
+{
+ int k;
+ int nr = XC_BITS_TO_BYTES(nbits);
+
+ for (k=0; k<nr; k++)
+ dp[k] = s1p[k] ^ s2p[k];
+}
+
+void __xc_bitmap_andnot(uint8_t *dp, uint8_t *s1p, uint8_t *s2p, int nbits)
+{
+ int k;
+ int nr = XC_BITS_TO_BYTES(nbits);
+
+ for (k=0; k<nr; k++)
+ dp[k] = s1p[k] & ~s2p[k];
+}
+
+void __xc_bitmap_complement(uint8_t *dp, uint8_t *sp, int nbits)
+{
+ int k, lim = nbits/XC_BITS_PER_BYTE;
+ for (k=0; k<lim; k++)
+ dp[k] = ~sp[k];
+
+ if (nbits % XC_BITS_PER_BYTE)
+ dp[k] = ~sp[k] & XC_BITMAP_LAST_BYTE_MASK(nbits);
+}
+
+int __xc_bitmap_equal(uint8_t *s1p, uint8_t *s2p, int nbits)
+{
+ int k, lim = nbits/XC_BITS_PER_BYTE;
+ for (k=0; k<lim; k++)
+ if (s1p[k] != s2p[k])
+ return 0;
+
+ if (nbits % XC_BITS_PER_BYTE)
+ if ((s1p[k] ^ s2p[k]) & XC_BITMAP_LAST_BYTE_MASK(nbits))
+ return 0;
+
+ return 1;
+}
+
+int __xc_bitmap_intersects(uint8_t *s1p, uint8_t *s2p, int nbits)
+{
+ int k, lim = nbits/XC_BITS_PER_BYTE;
+ for (k=0; k<lim; k++)
+ if (s1p[k] & s2p[k])
+ return 1;
+
+ if (nbits % XC_BITS_PER_BYTE)
+ if ((s1p[k] & s2p[k]) & XC_BITMAP_LAST_BYTE_MASK(nbits))
+ return 1;
+
+ return 0;
+}
+
+int __xc_bitmap_subset(uint8_t *s1p, uint8_t *s2p, int nbits)
+{
+ int k, lim = nbits/XC_BITS_PER_BYTE;
+ for (k=0; k<lim; k++)
+ if (s1p[k] & ~s2p[k])
+ return 0;
+
+ if (nbits % XC_BITS_PER_BYTE)
+ if ((s1p[k] & ~s2p[k]) & XC_BITMAP_LAST_BYTE_MASK(nbits))
+ return 0;
+
+ return 1;
+}
+
+int __xc_bitmap_empty(uint8_t *sp, int nbits)
+{
+ int k, lim = nbits/XC_BITS_PER_BYTE;
+ for (k=0; k<lim; k++)
+ if (sp[k])
+ return 0;
+
+ if (nbits % XC_BITS_PER_BYTE)
+ if (sp[k] & XC_BITMAP_LAST_BYTE_MASK(nbits))
+ return 0;
+
+ return 1;
+}
+
+int __xc_bitmap_full(uint8_t *sp, int nbits)
+{
+ int k, lim = nbits/XC_BITS_PER_BYTE;
+ for (k=0; k<lim; k++)
+ if (~sp[k] & XC_BITMAP_BYTE_MASK)
+ return 0;
+
+ if (nbits % XC_BITS_PER_BYTE)
+ if (~sp[k] & XC_BITMAP_LAST_BYTE_MASK(nbits))
+ return 0;
+
+ return 1;
+}
+
+static inline uint8_t hweight8(uint8_t w)
+{
+ uint8_t res = (w & 0x55) + ((w >> 1) & 0x55);
+ res = (res & 0x33) + ((res >> 2) & 0x33);
+ return (res & 0x0F) + ((res >> 4) & 0x0F);
+}
+
+int __xc_bitmap_weight(const uint8_t *sp, int nbits)
+{
+ int k, w = 0, lim = nbits/XC_BITS_PER_BYTE;
+
+ for (k=0; k <lim; k++)
+ w += hweight8(sp[k]);
+
+ if (nbits % XC_BITS_PER_BYTE)
+ w += hweight8(sp[k] & XC_BITMAP_LAST_BYTE_MASK(nbits));
+
+ return w;
+}
+
+/* xenctl_cpumap print functions */
+#define CHUNKSZ 8
+#define roundup_power2(val,modulus) (((val) + (modulus) - 1) & ~((modulus) - 1))
+
+int xc_bitmap_snprintf(char *buf, unsigned int buflen,
+ const uint8_t *maskp, int nmaskbits)
+{
+ int i, word, bit, len = 0;
+ unsigned long val;
+ const char *sep = "";
+ int chunksz;
+ uint8_t chunkmask;
+
+ chunksz = nmaskbits & (CHUNKSZ - 1);
+ if (chunksz == 0)
+ chunksz = CHUNKSZ;
+
+ i = roundup_power2(nmaskbits, CHUNKSZ) - CHUNKSZ;
+ for (; i >= 0; i -= CHUNKSZ) {
+ chunkmask = ((1ULL << chunksz) - 1);
+ word = i / XC_BITS_PER_BYTE;
+ bit = i % XC_BITS_PER_BYTE;
+ val = (maskp[word] >> bit) & chunkmask;
+ len += snprintf(buf+len, buflen-len, "%s%0*lx", sep,
+ (chunksz+3)/4, val);
+ chunksz = CHUNKSZ;
+ sep = ",";
+ }
+ return len;
+}
+
+
diff -r 04cb0829d138 tools/libxc/xc_bitmap.h
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/tools/libxc/xc_bitmap.h Tue Mar 23 12:29:26 2010 -0400
@@ -0,0 +1,193 @@
+#ifndef __XENCTL_BITMAP_H
+#define __XENCTL_BITMAP_H
+
+#include <stdint.h>
+#include <string.h>
+
+#define XC_BITS_PER_BYTE 8
+#define XC_BITS_TO_BYTES(bits) \
+ (((bits)+XC_BITS_PER_BYTE-1)/XC_BITS_PER_BYTE)
+#define XC_BITMAP_BIT(nr) (1 << (nr))
+#define XC_BITMAP_BIT_MASK(nr) (1 << ((nr) % XC_BITS_PER_BYTE))
+#define XC_BITMAP_BYTE(nr) ((nr) / XC_BITS_PER_BYTE)
+
+#define XC_BITMAP_BYTE_MASK (0xff)
+#define XC_BITMAP_LAST_BYTE_MASK(nbits) \
+ (((nbits) % XC_BITS_PER_BYTE) ? \
+ ((1<<((nbits) % XC_BITS_PER_BYTE))-1) : \
+ XC_BITMAP_BYTE_MASK)
+
+#define xc_bitmap_find_first_bit(addr, size) \
+ xc_bitmap_find_next_bit(addr, size, 0)
+extern int
+xc_bitmap_find_next_bit(const uint8_t *addr, uint32_t size, uint32_t offset);
+extern int
+xc_bitmap_find_next_bit(const uint8_t *addr, uint32_t size, uint32_t offset);
+
+#define xc_bitmap_find_first_zero_bit(addr, size) \
+ xc_bitmap_find_next_zero_bit(addr, size, 0)
+extern int xc_bitmap_find_next_zero_bit(
+ const uint8_t *addr, uint32_t size, uint32_t offset);
+
+extern void __xc_bitmap_and(uint8_t *dp, uint8_t *s1p, uint8_t *s2p, int nbits);
+extern void __xc_bitmap_or(uint8_t *dp, uint8_t *s1p, uint8_t *s2p, int nbits);
+extern void __xc_bitmap_xor(uint8_t *dp, uint8_t *s1p, uint8_t *s2p, int nbits);
+extern void
+__xc_bitmap_andnot(uint8_t *dp, uint8_t *s1p, uint8_t *s2p, int nbits);
+extern void __xc_bitmap_complement(uint8_t *dp, uint8_t *sp, int nbits);
+extern int __xc_bitmap_equal(uint8_t *s1p, uint8_t *s2p, int nbits);
+extern int __xc_bitmap_intersects(uint8_t *s1p, uint8_t *s2p, int nbits);
+extern int __xc_bitmap_subset(uint8_t *s1p, uint8_t *s2p, int nbits);
+extern int __xc_bitmap_empty(uint8_t *sp, int nbits);
+extern int __xc_bitmap_full(uint8_t *sp, int nbits);
+extern int __xc_bitmap_weight(const uint8_t *sp, int nbits);
+
+extern int xc_bitmap_snprintf(char *buf, unsigned int buflen,
+ const uint8_t *maskp, int nmaskbits);
+
+
+static inline void xc_bitmap_set_bit(int nr, volatile uint8_t *addr)
+{
+ uint8_t mask = XC_BITMAP_BIT_MASK(nr);
+ uint8_t *p = ((uint8_t *)addr) + XC_BITMAP_BYTE(nr);
+ *p |= mask;
+}
+
+static inline void xc_bitmap_clear_bit(int nr, volatile uint8_t *addr)
+{
+ uint8_t mask = XC_BITMAP_BIT_MASK(nr);
+ uint8_t *p = ((uint8_t *)addr) + XC_BITMAP_BYTE(nr);
+ *p &= ~mask;
+}
+
+static inline int xc_bitmap_test_bit(int nr, volatile uint8_t *addr)
+{
+ uint8_t mask = XC_BITMAP_BIT_MASK(nr);
+ uint8_t *p = ((uint8_t *)addr) + XC_BITMAP_BYTE(nr);
+ return *p & mask;
+}
+
+static inline void xc_bitmap_fill(uint8_t *dp, int nbits)
+{
+ size_t nbytes = XC_BITS_TO_BYTES(nbits);
+ if (nbytes > 1)
+ memset(dp, 0xff, nbytes-1);
+ dp[nbytes-1] = XC_BITMAP_LAST_BYTE_MASK(nbits);
+}
+
+static inline void xc_bitmap_zero(uint8_t *dp, int nbits)
+{
+ size_t nbytes = XC_BITS_TO_BYTES(nbits);
+ if (nbytes > 1)
+ memset(dp, 0x00, nbytes-1);
+ dp[nbytes-1] = 0;
+}
+
+
+static inline void
+xc_bitmap_and(uint8_t *dp, uint8_t *s1p, uint8_t *s2p, int nbits)
+{
+ if (nbits <= XC_BITS_PER_BYTE)
+ *dp = *s1p & *s2p;
+ else
+ __xc_bitmap_and(dp, s1p, s2p, nbits);
+}
+
+static inline void
+xc_bitmap_or(uint8_t *dp, uint8_t *s1p, uint8_t *s2p, int nbits)
+{
+ if (nbits <= XC_BITS_PER_BYTE)
+ *dp = *s1p | *s2p;
+ else
+ __xc_bitmap_or(dp, s1p, s2p, nbits);
+}
+
+static inline void
+xc_bitmap_xor(uint8_t *dp, uint8_t *s1p, uint8_t *s2p, int nbits)
+{
+ if (nbits <= XC_BITS_PER_BYTE)
+ *dp = *s1p ^ *s2p;
+ else
+ __xc_bitmap_xor(dp, s1p, s2p, nbits);
+}
+
+static inline void
+xc_bitmap_andnot(uint8_t *dp, uint8_t *s1p, uint8_t *s2p, int nbits)
+{
+ if (nbits <= XC_BITS_PER_BYTE)
+ *dp = *s1p & ~(*s2p);
+ else
+ __xc_bitmap_andnot(dp, s1p, s2p, nbits);
+}
+
+static inline void
+xc_bitmap_complement(uint8_t *dp, uint8_t *sp, int nbits)
+{
+ if (nbits <= XC_BITS_PER_BYTE)
+ *dp = ~(*sp) & XC_BITMAP_LAST_BYTE_MASK(nbits);
+ else
+ __xc_bitmap_complement(dp, sp, nbits);
+}
+
+static inline int
+xc_bitmap_equal(uint8_t *s1p, uint8_t *s2p, int nbits)
+{
+ if (nbits <= XC_BITS_PER_BYTE)
+ return !((*s1p ^ *s2p) & XC_BITMAP_LAST_BYTE_MASK(nbits));
+
+ return __xc_bitmap_equal(s1p, s2p, nbits);
+}
+
+static inline int
+xc_bitmap_intersects(uint8_t *s1p, uint8_t *s2p, int nbits)
+{
+ if (nbits <= XC_BITS_PER_BYTE)
+ return ((*s1p & *s2p) & XC_BITMAP_LAST_BYTE_MASK(nbits));
+
+ return __xc_bitmap_intersects(s1p, s2p, nbits);
+}
+
+static inline int
+xc_bitmap_subset(uint8_t *s1p, uint8_t *s2p, int nbits)
+{
+ if (nbits <= XC_BITS_PER_BYTE)
+ return !((*s1p & ~(*s2p)) & XC_BITMAP_LAST_BYTE_MASK(nbits));
+
+ return __xc_bitmap_subset(s1p, s2p, nbits);
+}
+
+static inline int
+xc_bitmap_empty(uint8_t *sp, int nbits)
+{
+ if (nbits <= XC_BITS_PER_BYTE)
+ return ! (*sp & XC_BITMAP_LAST_BYTE_MASK(nbits));
+ else
+ return __xc_bitmap_empty(sp, nbits);
+}
+
+static inline int
+xc_bitmap_full(uint8_t *sp, int nbits)
+{
+ if (nbits <= XC_BITS_PER_BYTE)
+ return ! (~(*sp) & XC_BITMAP_LAST_BYTE_MASK(nbits));
+ else
+ return __xc_bitmap_full(sp, nbits);
+}
+
+static inline uint32_t
+xc_bitmap_weight(const uint8_t *sp, int nbits)
+{
+ return __xc_bitmap_weight(sp, nbits);
+}
+
+
+static inline void
+xc_bitmap_copy(uint8_t *dp, const uint8_t *sp, int nbits)
+{
+ if (nbits <= XC_BITS_PER_BYTE)
+ *dp = *sp;
+ else
+ memcpy(dp, sp, XC_BITS_TO_BYTES(nbits));
+}
+
+#endif
diff -r 04cb0829d138 tools/libxc/xc_cpumap.h
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/tools/libxc/xc_cpumap.h Tue Mar 23 12:29:26 2010 -0400
@@ -0,0 +1,274 @@
+#ifndef __XENCTL_CPUMAP_H
+#define __XENCTL_CPUMAP_H
+
+#include "xc_private.h"
+#include "xc_bitmap.h"
+
+#define xc_cpumap_bits(maskp) \
+ ({ uint8_t *bitmap; \
+ get_xen_guest_handle(bitmap, (maskp)->bitmap); \
+ bitmap; })
+#define xc_cpumap_len(maskp) ((maskp)->nr_cpus)
+
+/* Number of cpus set in the bitmap */
+#define xc_cpumap_num_cpus(mask) xc_cpumap_weight(mask)
+
+/**
+ * xc_cpumap_first - get the first cpu in a xenctl_cpumap
+ * @srcp: the xenctl_cpumap pointer
+ *
+ * Returns >= xc_cpumap_len(srcp) if no cpus set.
+ */
+static inline unsigned int
+xc_cpumap_first(struct xenctl_cpumap *srcp)
+{
+ return xc_bitmap_find_first_bit(xc_cpumap_bits(srcp),
+ xc_cpumap_len(srcp));
+}
+
+/**
+ * xc_cpumap_next - get the next cpu in a xenctl_cpumap
+ * @n: the cpu prior to the place to search (ie. return will be > @n)
+ * @srcp: the xenctl_cpumap pointer
+ *
+ * Returns >= xc_cpumap_len(srcp) if no further cpus set.
+ */
+static inline uint32_t
+xc_cpumap_next(int n, struct xenctl_cpumap *srcp)
+{
+ return xc_bitmap_find_next_bit(xc_cpumap_bits(srcp),
+ xc_cpumap_len(srcp), n+1);
+}
+
+#if 0
+static inline uint32_t
+xc_cpumap_next_zero(int n, struct xenctl_cpumap *srcp)
+{
+ return xc_bitmap_find_next_zero_bit(xc_cpumap_bits(srcp),
+ xc_cpumap_len(srcp), n+1);
+}
+#endif
+
+/**
+ * xc_for_each_cpu - iterate over every cpu in a mask
+ * @cpu: the (optionally unsigned) integer iterator
+ * @mask: the xenctl_cpumap pointer
+ *
+ * After the loop, cpu is >= xc_cpumap_len(mask)
+ */
+#define xc_for_each_cpu(cpu, mask) \
+ __xc_for_each_cpu(cpu, &(mask))
+
+#define __xc_for_each_cpu(cpu, mask) \
+ for ((cpu) = -1; \
+ (cpu) = xc_cpumap_next((cpu), (mask)), \
+ (cpu) < xc_cpumap_len(mask);)
+
+
+#define xc_cpumap_equal(src1, src2) __xc_cpumap_equal(&(src1), &(src2))
+static inline int
+__xc_cpumap_equal(struct xenctl_cpumap *s1p, struct xenctl_cpumap *s2p)
+{
+ return xc_bitmap_equal(xc_cpumap_bits(s1p), xc_cpumap_bits(s2p),
+ xc_cpumap_len(s1p));
+}
+
+#define xc_cpumap_set_cpu(cpu, dst) __xc_cpumap_set_cpu(cpu, &(dst))
+static inline void __xc_cpumap_set_cpu(int cpu, struct xenctl_cpumap *dstp)
+{
+ xc_bitmap_set_bit(cpu, xc_cpumap_bits(dstp));
+}
+
+#define xc_cpumap_clear_cpu(cpu, dst) __xc_cpumap_clear_cpu(cpu, &(dst))
+static inline void __xc_cpumap_clear_cpu(int cpu, struct xenctl_cpumap *dstp)
+{
+ xc_bitmap_clear_bit(cpu, xc_cpumap_bits(dstp));
+}
+
+#define xc_cpumap_test_cpu(cpu, dst) __xc_cpumap_test_cpu(cpu, &(dst))
+static inline int __xc_cpumap_test_cpu(int cpu, struct xenctl_cpumap *dstp)
+{
+ return xc_bitmap_test_bit(cpu, xc_cpumap_bits(dstp));
+}
+
+
+#define xc_cpumap_setall(dst) __xc_cpumap_setall(&(dst))
+static inline void __xc_cpumap_setall(struct xenctl_cpumap *dstp)
+{
+ xc_bitmap_fill(xc_cpumap_bits(dstp), xc_cpumap_len(dstp));
+}
+
+#define xc_cpumap_clearall(dst) __xc_cpumap_clearall(&(dst))
+static inline void __xc_cpumap_clearall(struct xenctl_cpumap *dstp)
+{
+ xc_bitmap_zero(xc_cpumap_bits(dstp), xc_cpumap_len(dstp));
+}
+
+#define xc_cpumap_and(dst, src1, src2) \
+ __xc_cpumap_and(&(dst), &(src1), &(src2))
+static inline void __xc_cpumap_and(struct xenctl_cpumap *dstp,
+ struct xenctl_cpumap *src1p, struct xenctl_cpumap *src2p)
+{
+ xc_bitmap_and(xc_cpumap_bits(dstp), xc_cpumap_bits(src1p),
+ xc_cpumap_bits(src2p), xc_cpumap_len(dstp));
+}
+
+#define xc_cpumap_or(dst, src1, src2) \
+ __xc_cpumap_or(&(dst), &(src1), &(src2))
+static inline void __xc_cpumap_or(struct xenctl_cpumap *dstp,
+ struct xenctl_cpumap *src1p, struct xenctl_cpumap *src2p)
+{
+ xc_bitmap_or(xc_cpumap_bits(dstp), xc_cpumap_bits(src1p),
+ xc_cpumap_bits(src2p), xc_cpumap_len(dstp));
+}
+
+#define xc_cpumap_xor(dst, src1, src2) \
+ __xc_cpumap_xor(&(dst), &(src1), &(src2))
+static inline void __xc_cpumap_xor(struct xenctl_cpumap *dstp,
+ struct xenctl_cpumap *src1p, struct xenctl_cpumap *src2p)
+{
+ xc_bitmap_xor(xc_cpumap_bits(dstp), xc_cpumap_bits(src1p),
+ xc_cpumap_bits(src2p), xc_cpumap_len(dstp));
+}
+
+#define xc_cpumap_andnot(dst, src1, src2) \
+ __xc_cpumap_andnot(&(dst), &(src1), &(src2))
+static inline void __xc_cpumap_andnot(struct xenctl_cpumap *dstp,
+ struct xenctl_cpumap *src1p, struct xenctl_cpumap *src2p)
+{
+ xc_bitmap_andnot(xc_cpumap_bits(dstp), xc_cpumap_bits(src1p),
+ xc_cpumap_bits(src2p), xc_cpumap_len(dstp));
+}
+
+#define xc_cpumap_complement(dst, src) \
+ __xc_cpumap_complement(&(dst), &(src))
+static inline void __xc_cpumap_complement(struct xenctl_cpumap *dstp,
+ struct xenctl_cpumap *srcp)
+{
+ xc_bitmap_complement(xc_cpumap_bits(dstp), xc_cpumap_bits(srcp),
+ xc_cpumap_len(dstp));
+}
+
+#define xc_cpumap_intersects(src1, src2) \
+ __xc_cpumap_intersects(&(src1), &(src2))
+static inline int __xc_cpumap_intersects(struct xenctl_cpumap *src1p,
+ struct xenctl_cpumap *src2p)
+{
+ return xc_bitmap_intersects(xc_cpumap_bits(src1p), xc_cpumap_bits(src2p),
+ xc_cpumap_len(src1p));
+}
+
+#define xc_cpumap_subset(src1, src2) \
+ __xc_cpumap_subset(&(src1), &(src2))
+static inline int __xc_cpumap_subset(struct xenctl_cpumap *src1p,
+ struct xenctl_cpumap *src2p)
+{
+ return xc_bitmap_subset(xc_cpumap_bits(src1p), xc_cpumap_bits(src2p),
+ xc_cpumap_len(src1p));
+}
+
+#define xc_cpumap_empty(src) __xc_cpumap_empty(&(src))
+static inline int __xc_cpumap_empty(struct xenctl_cpumap *srcp)
+{
+ return xc_bitmap_empty(xc_cpumap_bits(srcp), xc_cpumap_len(srcp));
+}
+
+#define xc_cpumap_full(src) __xc_cpumap_full(&(src))
+static inline int __xc_cpumap_full(struct xenctl_cpumap *srcp)
+{
+ return xc_bitmap_full(xc_cpumap_bits(srcp), xc_cpumap_len(srcp));
+}
+
+#define xc_cpumap_weight(src) __xc_cpumap_weight(&(src))
+static inline uint32_t __xc_cpumap_weight(struct xenctl_cpumap *srcp)
+{
+ return xc_bitmap_weight(xc_cpumap_bits(srcp), xc_cpumap_len(srcp));
+}
+
+#define xc_cpumap_copy(dst, src) __xc_cpumap_copy(&(dst), &(src))
+static inline void __xc_cpumap_copy(struct xenctl_cpumap *dstp,
+ struct xenctl_cpumap *srcp)
+{
+ xc_bitmap_copy(xc_cpumap_bits(dstp), xc_cpumap_bits(srcp),
+ xc_cpumap_len(dstp));
+}
+
+#if 0
+#define XC_CPUMASK_LAST_BYTE XC_BITMAP_LAST_BYTE_MASK(XENCTL_NR_CPUS)
+
+#define XC_CPUMASK_ALL \
+/*(xenctl_cpumap)*/ { { \
+ [0 ... XC_BITS_TO_BYTES(XENCTL_NR_CPUS)-2] = 0xff, \
+ [XC_BITS_TO_BYTES(XENCTL_NR_CPUS)-1] = XC_CPUMASK_LAST_BYTE \
+} }
+
+#define XC_CPUMASK_NONE \
+/*(xenctl_cpumap)*/ { { \
+ [0 ... XC_BITS_TO_BYTES(XENCTL_NR_CPUS)-1] = 0 \
+} }
+#endif
+
+#define xc_cpumap_snprintf(buf, len, src) \
+ __xc_cpumap_snprintf((buf), (len), &(src), xc_cpumap_len(&(src)))
+static inline int __xc_cpumap_snprintf(char *buf, int len,
+ const struct xenctl_cpumap *srcp, int nbits)
+{
+ return xc_bitmap_snprintf(buf, len, xc_cpumap_bits(srcp), nbits);
+}
+
+/***********************************************************************/
+
+static inline int
+xc_cpumap_allocz_bitmap(int xc_handle, struct xenctl_cpumap *map)
+{
+ int nr_cpus;
+ uint8_t *bitmap;
+ xc_physinfo_t pinfo = { 0 };
+
+ if (xc_physinfo(xc_handle, &pinfo))
+ goto failed;
+
+ nr_cpus = pinfo.nr_cpus;
+ if (!(bitmap = malloc(XC_BITS_TO_BYTES(nr_cpus))))
+ goto failed;
+
+ xc_bitmap_zero(bitmap, nr_cpus);
+ map->nr_cpus = pinfo.nr_cpus;
+ set_xen_guest_handle(map->bitmap, bitmap);
+ return 0;
+failed:
+ return -1;
+}
+
+static inline void
+xc_cpumap_free_bitmap(struct xenctl_cpumap *map)
+{
+ uint8_t *bitmap;
+ get_xen_guest_handle(bitmap, map->bitmap);
+ free(bitmap);
+}
+
+static inline int
+xc_cpumap_lock_pages(struct xenctl_cpumap *map)
+{
+ uint8_t *bitmap;
+ uint32_t nr_bytes = XC_BITS_TO_BYTES(map->nr_cpus);
+
+ get_xen_guest_handle(bitmap, map->bitmap);
+
+ if (lock_pages(bitmap, nr_bytes))
+ return -1;
+ return 0;
+}
+
+static inline void
+xc_cpumap_unlock_pages(struct xenctl_cpumap *map)
+{
+ uint8_t *bitmap;
+ uint32_t nr_bytes = XC_BITS_TO_BYTES(map->nr_cpus);
+
+ get_xen_guest_handle(bitmap, map->bitmap);
+ unlock_pages(bitmap, nr_bytes);
+}
+
+#endif /* __XENCTL_CPUMAP_H */
diff -r 04cb0829d138 tools/libxc/xc_domain.c
--- a/tools/libxc/xc_domain.c Wed Mar 17 14:10:43 2010 +0000
+++ b/tools/libxc/xc_domain.c Tue Mar 23 12:29:26 2010 -0400
@@ -8,6 +8,7 @@
#include "xc_private.h"
#include "xg_save_restore.h"
+#include "xc_cpumap.h"
#include <xen/memory.h>
#include <xen/hvm/hvm_op.h>
@@ -98,28 +99,17 @@
int xc_vcpu_setaffinity(int xc_handle,
uint32_t domid,
int vcpu,
- uint64_t *cpumap, int cpusize)
+ struct xenctl_cpumap *cpumap)
{
DECLARE_DOMCTL;
int ret = -1;
- uint8_t *local = malloc(cpusize);
- if(local == NULL)
- {
- PERROR("Could not alloc memory for Xen hypercall");
- goto out;
- }
domctl.cmd = XEN_DOMCTL_setvcpuaffinity;
domctl.domain = (domid_t)domid;
- domctl.u.vcpuaffinity.vcpu = vcpu;
+ domctl.u.vcpuaffinity.vcpu = vcpu;
+ domctl.u.vcpuaffinity.cpumap = *cpumap;
- bitmap_64_to_byte(local, cpumap, cpusize * 8);
-
- set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local);
-
- domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8;
-
- if ( lock_pages(local, cpusize) != 0 )
+ if (xc_cpumap_lock_pages(cpumap))
{
PERROR("Could not lock memory for Xen hypercall");
goto out;
@@ -127,10 +117,9 @@
ret = do_domctl(xc_handle, &domctl);
- unlock_pages(local, cpusize);
+ xc_cpumap_unlock_pages(cpumap);
out:
- free(local);
return ret;
}
@@ -138,28 +127,18 @@
int xc_vcpu_getaffinity(int xc_handle,
uint32_t domid,
int vcpu,
- uint64_t *cpumap,
- int cpusize)
+ struct xenctl_cpumap *cpumap)
{
DECLARE_DOMCTL;
int ret = -1;
- uint8_t * local = malloc(cpusize);
-
- if(local == NULL)
- {
- PERROR("Could not alloc memory for Xen hypercall");
- goto out;
- }
domctl.cmd = XEN_DOMCTL_getvcpuaffinity;
domctl.domain = (domid_t)domid;
domctl.u.vcpuaffinity.vcpu = vcpu;
+ domctl.u.vcpuaffinity.cpumap = *cpumap;
- set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local);
- domctl.u.vcpuaffinity.cpumap.nr_cpus = cpusize * 8;
-
- if ( lock_pages(local, sizeof(local)) != 0 )
+ if (xc_cpumap_lock_pages(cpumap))
{
PERROR("Could not lock memory for Xen hypercall");
goto out;
@@ -167,10 +146,8 @@
ret = do_domctl(xc_handle, &domctl);
- unlock_pages(local, sizeof (local));
- bitmap_byte_to_64(cpumap, local, cpusize * 8);
+ xc_cpumap_unlock_pages(cpumap);
out:
- free(local);
return ret;
}
diff -r 04cb0829d138 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h Wed Mar 17 14:10:43 2010 +0000
+++ b/tools/libxc/xenctrl.h Tue Mar 23 12:29:26 2010 -0400
@@ -309,13 +309,11 @@
int xc_vcpu_setaffinity(int xc_handle,
uint32_t domid,
int vcpu,
- uint64_t *cpumap,
- int cpusize);
+ struct xenctl_cpumap *cpumap);
int xc_vcpu_getaffinity(int xc_handle,
uint32_t domid,
int vcpu,
- uint64_t *cpumap,
- int cpusize);
+ struct xenctl_cpumap *cpumap);
/**
* This function will return information about one or more domains. It is
diff -r 04cb0829d138 tools/python/xen/lowlevel/xc/xc.c
--- a/tools/python/xen/lowlevel/xc/xc.c Wed Mar 17 14:10:43 2010 +0000
+++ b/tools/python/xen/lowlevel/xc/xc.c Tue Mar 23 12:29:26 2010 -0400
@@ -23,6 +23,7 @@
#include "xc_dom.h"
#include <xen/hvm/hvm_info_table.h>
#include <xen/hvm/params.h>
+#include "xc_cpumap.h"
#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
@@ -215,12 +216,8 @@
{
uint32_t dom;
int vcpu = 0, i;
- uint64_t *cpumap;
+ struct xenctl_cpumap cpumap;
PyObject *cpulist = NULL;
- int nr_cpus, size;
- xc_physinfo_t info;
- xc_cpu_to_node_t map[1];
- uint64_t cpumap_size = sizeof(cpumap);
static char *kwd_list[] = { "domid", "vcpu", "cpumap", NULL };
@@ -229,40 +226,26 @@
&dom, &vcpu, &cpulist) )
return NULL;
- set_xen_guest_handle(info.cpu_to_node, map);
- info.max_cpu_id = 1;
- if ( xc_physinfo(self->xc_handle, &info) != 0 )
+ if (xc_cpumap_allocz_bitmap(self->xc_handle, &cpumap))
return pyxc_error_to_exception();
-
- nr_cpus = info.nr_cpus;
-
- size = (nr_cpus + cpumap_size * 8 - 1)/ (cpumap_size * 8);
- cpumap = malloc(cpumap_size * size);
- if(cpumap == NULL)
- return pyxc_error_to_exception();
-
if ( (cpulist != NULL) && PyList_Check(cpulist) )
{
- for ( i = 0; i < size; i++)
- {
- cpumap[i] = 0ULL;
- }
for ( i = 0; i < PyList_Size(cpulist); i++ )
{
long cpu = PyInt_AsLong(PyList_GetItem(cpulist, i));
- *(cpumap + cpu / (cpumap_size * 8)) |= (uint64_t)1 << (cpu % (cpumap_size * 8));
+ xc_cpumap_set_cpu(cpu, cpumap);
}
}
- if ( xc_vcpu_setaffinity(self->xc_handle, dom, vcpu, cpumap, size * cpumap_size) != 0 )
+ if (xc_vcpu_setaffinity(self->xc_handle, dom, vcpu, &cpumap))
{
- free(cpumap);
+ xc_cpumap_free_bitmap(&cpumap);
return pyxc_error_to_exception();
}
Py_INCREF(zero);
- free(cpumap);
+ xc_cpumap_free_bitmap(&cpumap);
return zero;
}
@@ -381,11 +364,7 @@
uint32_t dom, vcpu = 0;
xc_vcpuinfo_t info;
int rc, i;
- uint64_t *cpumap;
- int nr_cpus, size;
- xc_physinfo_t pinfo = { 0 };
- xc_cpu_to_node_t map[1];
- uint64_t cpumap_size = sizeof(cpumap);
+ struct xenctl_cpumap cpumap;
static char *kwd_list[] = { "domid", "vcpu", NULL };
@@ -393,23 +372,14 @@
&dom, &vcpu) )
return NULL;
- set_xen_guest_handle(pinfo.cpu_to_node, map);
- pinfo.max_cpu_id = 1;
- if ( xc_physinfo(self->xc_handle, &pinfo) != 0 )
+ if ( xc_cpumap_allocz_bitmap(self->xc_handle, &cpumap) )
return pyxc_error_to_exception();
- nr_cpus = pinfo.nr_cpus;
- rc = xc_vcpu_getinfo(self->xc_handle, dom, vcpu, &info);
- if ( rc < 0 )
+ if ((rc = xc_vcpu_getinfo(self->xc_handle, dom, vcpu, &info)) < 0)
return pyxc_error_to_exception();
- size = (nr_cpus + cpumap_size * 8 - 1)/ (cpumap_size * 8);
- if((cpumap = malloc(cpumap_size * size)) == NULL)
- return pyxc_error_to_exception();
-
- rc = xc_vcpu_getaffinity(self->xc_handle, dom, vcpu, cpumap, cpumap_size * size);
- if ( rc < 0 )
+ if ((rc = xc_vcpu_getaffinity(self->xc_handle, dom, vcpu, &cpumap)) < 0)
{
- free(cpumap);
+ xc_cpumap_free_bitmap(&cpumap);
return pyxc_error_to_exception();
}
@@ -421,18 +391,15 @@
"cpu", info.cpu);
cpulist = PyList_New(0);
- for ( i = 0; i < size * cpumap_size * 8; i++ )
+ xc_for_each_cpu(i, cpumap)
{
- if (*(cpumap + i / (cpumap_size * 8)) & 1 ) {
- PyObject *pyint = PyInt_FromLong(i);
- PyList_Append(cpulist, pyint);
- Py_DECREF(pyint);
- }
- *(cpumap + i / (cpumap_size * 8)) >>= 1;
+ PyObject *pyint = PyInt_FromLong(i);
+ PyList_Append(cpulist, pyint);
+ Py_DECREF(pyint);
}
PyDict_SetItemString(info_dict, "cpumap", cpulist);
Py_DECREF(cpulist);
- free(cpumap);
+ xc_cpumap_free_bitmap(&cpumap);
return info_dict;
}
[-- Attachment #3: Type: text/plain, Size: 138 bytes --]
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel