* [Patch] use full-size cpumask for vcpu-pin
@ 2010-08-02 6:40 Juergen Gross
2010-08-10 14:11 ` Ian Jackson
2010-08-13 12:31 ` Ian Jackson
0 siblings, 2 replies; 13+ messages in thread
From: Juergen Gross @ 2010-08-02 6:40 UTC (permalink / raw)
To: xen-devel@lists.xensource.com
[-- Attachment #1: Type: text/plain, Size: 1038 bytes --]
Hi,
The attached patch solves a problem with vcpu-pinning and hot-plugging of cpus:
If a vcpu is unpinned via
xl vcpu-pin <domain> <vcpu> all
on a system with 64 cpus and other cpus are plugged in later, this vcpu will
remain restricted to the first 64 cpus of the system.
The reason is the allocation of the cpumap for pinning: the size is only for
the ACTUAL number of physical cpus in the system, not the possible number.
The solution is to allocate a cpumap for up to NR_CPUS.
Repairing xm vcpu-pin is much harder and is not covered by this patch, but the
problem can be avoided by calling
xm vcpu-pin <domain> <vcpu> 0-255
instead ('all' is currently hard-wired to 0-63).
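To illustrate the sizing difference, here is a minimal standalone sketch (not
the patch's code; NR_CPUS stands in for the hypervisor's build-time maximum,
and its value here is purely illustrative):

#include <stdint.h>
#include <stdlib.h>

#define NR_CPUS 256  /* build-time maximum; value illustrative */

/* Broken: the bitmap only covers the cpus present when the pin is set,
 * so cpus hot-plugged later can never be named in it. */
uint8_t *alloc_cpumap_actual(unsigned int nr_cpus_now)
{
    return calloc((nr_cpus_now + 7) / 8, 1);
}

/* Fixed: the bitmap covers every cpu the hypervisor could ever bring up,
 * so 'all' stays 'all' after hot-plug. */
uint8_t *alloc_cpumap_possible(void)
{
    return calloc((NR_CPUS + 7) / 8, 1);
}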
Juergen
--
Juergen Gross Principal Developer Operating Systems
TSP ES&S SWE OS6 Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions e-mail: juergen.gross@ts.fujitsu.com
Domagkstr. 28 Internet: ts.fujitsu.com
D-80807 Muenchen Company details: ts.fujitsu.com/imprint.html
[-- Attachment #2: cpumask.patch --]
[-- Type: text/x-patch, Size: 4701 bytes --]
Signed-off-by: juergen.gross@ts.fujitsu.com
diff -r 3263d0ff9476 tools/libxl/libxl.c
--- a/tools/libxl/libxl.c Thu Jul 29 16:53:40 2010 +0100
+++ b/tools/libxl/libxl.c Mon Aug 02 08:27:03 2010 +0200
@@ -2479,6 +2479,7 @@ int libxl_get_physinfo(libxl_ctx *ctx, l
physinfo->max_cpu_id = xcphysinfo.max_cpu_id;
physinfo->nr_cpus = xcphysinfo.nr_cpus;
physinfo->cpu_khz = xcphysinfo.cpu_khz;
+ physinfo->max_phys_cpus = xcphysinfo.max_phys_cpus;
physinfo->total_pages = xcphysinfo.total_pages;
physinfo->free_pages = xcphysinfo.free_pages;
physinfo->scrub_pages = xcphysinfo.scrub_pages;
@@ -2550,7 +2551,7 @@ libxl_vcpuinfo *libxl_list_vcpu(libxl_ct
XL_LOG_ERRNO(ctx, XL_LOG_ERROR, "getting physinfo");
return NULL;
}
- *cpusize = physinfo.max_cpu_id + 1;
+ *cpusize = physinfo.max_phys_cpus + 1;
ptr = libxl_calloc(ctx, domaininfo.max_vcpu_id + 1, sizeof (libxl_vcpuinfo));
if (!ptr) {
return NULL;
diff -r 3263d0ff9476 tools/libxl/libxl.h
--- a/tools/libxl/libxl.h Thu Jul 29 16:53:40 2010 +0100
+++ b/tools/libxl/libxl.h Mon Aug 02 08:27:03 2010 +0200
@@ -581,6 +581,7 @@ typedef struct {
uint32_t max_cpu_id;
uint32_t nr_cpus;
uint32_t cpu_khz;
+ uint32_t max_phys_cpus;
uint64_t total_pages;
uint64_t free_pages;
diff -r 3263d0ff9476 tools/libxl/xl_cmdimpl.c
--- a/tools/libxl/xl_cmdimpl.c Thu Jul 29 16:53:40 2010 +0100
+++ b/tools/libxl/xl_cmdimpl.c Mon Aug 02 08:27:03 2010 +0200
@@ -3297,7 +3297,7 @@ void vcpupin(char *d, const char *vcpu,
goto vcpupin_out1;
}
- cpumap = calloc(physinfo.max_cpu_id + 1, sizeof (uint64_t));
+ cpumap = calloc(physinfo.max_phys_cpus + 1, sizeof (uint64_t));
if (!cpumap) {
goto vcpupin_out1;
}
@@ -3325,12 +3325,12 @@ void vcpupin(char *d, const char *vcpu,
}
}
else {
- memset(cpumap, -1, sizeof (uint64_t) * (physinfo.max_cpu_id + 1));
+ memset(cpumap, -1, sizeof (uint64_t) * (physinfo.max_phys_cpus + 1));
}
if (vcpuid != -1) {
if (libxl_set_vcpuaffinity(&ctx, domid, vcpuid,
- cpumap, physinfo.max_cpu_id + 1) == -1) {
+ cpumap, physinfo.max_phys_cpus + 1) == -1) {
fprintf(stderr, "Could not set affinity for vcpu `%u'.\n", vcpuid);
}
}
@@ -3341,7 +3341,7 @@ void vcpupin(char *d, const char *vcpu,
}
for (; nb_vcpu > 0; --nb_vcpu, ++vcpuinfo) {
if (libxl_set_vcpuaffinity(&ctx, domid, vcpuinfo->vcpuid,
- cpumap, physinfo.max_cpu_id + 1) == -1) {
+ cpumap, physinfo.max_phys_cpus + 1) == -1) {
fprintf(stderr, "libxl_list_vcpu failed on vcpu `%u'.\n", vcpuinfo->vcpuid);
}
}
diff -r 3263d0ff9476 tools/python/xen/lowlevel/xc/xc.c
--- a/tools/python/xen/lowlevel/xc/xc.c Thu Jul 29 16:53:40 2010 +0100
+++ b/tools/python/xen/lowlevel/xc/xc.c Mon Aug 02 08:27:03 2010 +0200
@@ -241,7 +241,7 @@ static PyObject *pyxc_vcpu_setaffinity(X
if ( xc_physinfo(self->xc_handle, &info) != 0 )
return pyxc_error_to_exception(self->xc_handle);
- nr_cpus = info.nr_cpus;
+ nr_cpus = info.max_phys_cpus;
size = (nr_cpus + cpumap_size * 8 - 1)/ (cpumap_size * 8);
cpumap = malloc(cpumap_size * size);
@@ -400,7 +400,7 @@ static PyObject *pyxc_vcpu_getinfo(XcObj
if ( xc_physinfo(self->xc_handle, &pinfo) != 0 )
return pyxc_error_to_exception(self->xc_handle);
- nr_cpus = pinfo.nr_cpus;
+ nr_cpus = pinfo.max_phys_cpus;
rc = xc_vcpu_getinfo(self->xc_handle, dom, vcpu, &info);
if ( rc < 0 )
diff -r 3263d0ff9476 xen/arch/x86/sysctl.c
--- a/xen/arch/x86/sysctl.c Thu Jul 29 16:53:40 2010 +0100
+++ b/xen/arch/x86/sysctl.c Mon Aug 02 08:27:03 2010 +0200
@@ -68,6 +68,7 @@ long arch_do_sysctl(
pi->free_pages = avail_domheap_pages();
pi->scrub_pages = 0;
pi->cpu_khz = cpu_khz;
+ pi->max_phys_cpus = NR_CPUS;
memcpy(pi->hw_cap, boot_cpu_data.x86_capability, NCAPINTS*4);
if ( hvm_enabled )
pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm;
diff -r 3263d0ff9476 xen/include/public/sysctl.h
--- a/xen/include/public/sysctl.h Thu Jul 29 16:53:40 2010 +0100
+++ b/xen/include/public/sysctl.h Mon Aug 02 08:27:03 2010 +0200
@@ -96,6 +96,7 @@ struct xen_sysctl_physinfo {
uint32_t nr_cpus, max_cpu_id;
uint32_t nr_nodes, max_node_id;
uint32_t cpu_khz;
+ uint32_t max_phys_cpus;
uint64_aligned_t total_pages;
uint64_aligned_t free_pages;
uint64_aligned_t scrub_pages;
* Re: [Patch] use full-size cpumask for vcpu-pin
2010-08-02 6:40 [Patch] use full-size cpumask for vcpu-pin Juergen Gross
@ 2010-08-10 14:11 ` Ian Jackson
2010-08-10 14:27 ` Keir Fraser
2010-08-11 4:32 ` Juergen Gross
2010-08-13 12:31 ` Ian Jackson
1 sibling, 2 replies; 13+ messages in thread
From: Ian Jackson @ 2010-08-10 14:11 UTC (permalink / raw)
To: Juergen Gross; +Cc: xen-devel@lists.xensource.com
Juergen Gross writes ("[Xen-devel] [Patch] use full-size cpumask for vcpu-pin"):
> The reason is the allocation of the cpumap for pinning: the size is only for
> the ACTUAL number of physical cpus in the system, not the possible number.
> The solution is to allocate a cpumap for up to NR_CPUS.
Thanks for this patch. However, this doesn't seem to be even slightly
backwards-compatible, because it changes the layout of the physinfo
sysctl response.
Ian.
* Re: [Patch] use full-size cpumask for vcpu-pin
2010-08-10 14:11 ` Ian Jackson
@ 2010-08-10 14:27 ` Keir Fraser
2010-08-11 4:32 ` Juergen Gross
1 sibling, 0 replies; 13+ messages in thread
From: Keir Fraser @ 2010-08-10 14:27 UTC (permalink / raw)
To: Ian Jackson, Juergen Gross; +Cc: xen-devel@lists.xensource.com
On 10/08/2010 15:11, "Ian Jackson" <Ian.Jackson@eu.citrix.com> wrote:
> Juergen Gross writes ("[Xen-devel] [Patch] use full-size cpumask for
> vcpu-pin"):
>> The reason is the allocation of the cpumap for pinning: the size is only for
>> the ACTUAL number of physical cpus in the system, not the possible number.
>> The solution is to allocate a cpumap for up to NR_CPUS.
>
> Thanks for this patch. However, this doesn't seem to be even slightly
> backwards-compatible, because it changes the layout of the physinfo
> sysctl response.
The domctl and sysctl hypercalls do not need to be backward compatible
across major Xen releases. All other hypercalls absolutely do.
-- Keir
* Re: [Patch] use full-size cpumask for vcpu-pin
2010-08-10 14:11 ` Ian Jackson
2010-08-10 14:27 ` Keir Fraser
@ 2010-08-11 4:32 ` Juergen Gross
1 sibling, 0 replies; 13+ messages in thread
From: Juergen Gross @ 2010-08-11 4:32 UTC (permalink / raw)
To: Ian Jackson; +Cc: xen-devel@lists.xensource.com
On 08/10/10 16:11, Ian Jackson wrote:
> Juergen Gross writes ("[Xen-devel] [Patch] use full-size cpumask for vcpu-pin"):
>> The reason is the allocation of the cpumap for pinning: the size is only for
>> the ACTUAL number of physical cpus in the system, not the possible number.
>> The solution is to allocate a cpumap for up to NR_CPUS.
>
> Thanks for this patch. However, this doesn't seem to be even slightly
> backwards-compatible, because it changes the layout of the physinfo
> sysctl response.
No, it doesn't change the layout. The new structure member just fills a hole
which was already there due to alignment of the next member.
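For illustration, a sketch of the layout argument (relative offsets, assuming
the region shown in the patch hunk starts 8-byte aligned; earlier struct
fields elided):

uint32_t nr_cpus, max_cpu_id;    /* offsets  0,  4 */
uint32_t nr_nodes, max_node_id;  /* offsets  8, 12 */
uint32_t cpu_khz;                /* offset  16 */
uint32_t max_phys_cpus;          /* offset  20: fills the former padding hole */
uint64_aligned_t total_pages;    /* offset  24, as before: its 8-byte
                                  * alignment is what created the hole */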
Juergen
--
Juergen Gross Principal Developer Operating Systems
TSP ES&S SWE OS6 Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions e-mail: juergen.gross@ts.fujitsu.com
Domagkstr. 28 Internet: ts.fujitsu.com
D-80807 Muenchen Company details: ts.fujitsu.com/imprint.html
* Re: [Patch] use full-size cpumask for vcpu-pin
2010-08-02 6:40 [Patch] use full-size cpumask for vcpu-pin Juergen Gross
2010-08-10 14:11 ` Ian Jackson
@ 2010-08-13 12:31 ` Ian Jackson
2010-08-13 12:58 ` Keir Fraser
1 sibling, 1 reply; 13+ messages in thread
From: Ian Jackson @ 2010-08-13 12:31 UTC (permalink / raw)
To: Juergen Gross; +Cc: xen-devel@lists.xensource.com
Juergen Gross writes ("[Xen-devel] [Patch] use full-size cpumask for vcpu-pin"):
> The attached patch solves a problem with vcpu-pinning and hot-plugging of cpus:
Thanks. This is a mixed tools/hypervisor patch. We've discussed it
and it seems good to me. Keir, would you care to apply it, or would
you like it to go through the tools tree ?
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Ian.
* Re: [Patch] use full-size cpumask for vcpu-pin
2010-08-13 12:31 ` Ian Jackson
@ 2010-08-13 12:58 ` Keir Fraser
2010-08-13 13:11 ` Ian Jackson
0 siblings, 1 reply; 13+ messages in thread
From: Keir Fraser @ 2010-08-13 12:58 UTC (permalink / raw)
To: Ian Jackson, Juergen Gross; +Cc: xen-devel@lists.xensource.com
On 13/08/2010 13:31, "Ian Jackson" <Ian.Jackson@eu.citrix.com> wrote:
> Juergen Gross writes ("[Xen-devel] [Patch] use full-size cpumask for
> vcpu-pin"):
>> The attached patch solves a problem with vcpu-pinning and hot-plugging of cpus:
>
> Thanks. This is a mixed tools/hypervisor patch. We've discussed it
> and it seems good to me. Keir, would you care to apply it, or would
> you like it to go through the tools tree ?
>
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Actually, now I look at it, I have to NACK the patch. Sorry I didn't look
closely enough earlier. I think the bug can be addressed without any
hypervisor changes: when vcpu-pinning, the tools can quite correctly pass a
cpumask to Xen just big enough to express the CPUs in the new affinity
set. If the resulting mask is too narrow to address all CPUs in the system,
then Xen will pad it out with zeroes. If the resulting mask is too wide, Xen
will simply truncate it. All this is done silently at the time of the
setvcpuaffinity hypercall. Hence, Juergen's hypervisor changes are really
unnecessary, and a neater fix could probably be achieved in the tools alone.
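A sketch of the mask handling described above (hypothetical helper, not the
actual Xen code):

#include <stdint.h>
#include <string.h>

/* Copy a guest-supplied cpumask into Xen's fixed-width mask: a too-wide
 * guest mask is silently truncated, a too-narrow one is zero-padded. */
void apply_guest_cpumask(uint8_t *xen_mask, unsigned int xen_bytes,
                         const uint8_t *guest_mask, unsigned int guest_bytes)
{
    unsigned int n = guest_bytes < xen_bytes ? guest_bytes : xen_bytes;

    memcpy(xen_mask, guest_mask, n);
    if (n < xen_bytes)
        memset(xen_mask + n, 0, xen_bytes - n);
}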
-- Keir
* Re: [Patch] use full-size cpumask for vcpu-pin
2010-08-13 12:58 ` Keir Fraser
@ 2010-08-13 13:11 ` Ian Jackson
2010-08-13 13:15 ` Keir Fraser
0 siblings, 1 reply; 13+ messages in thread
From: Ian Jackson @ 2010-08-13 13:11 UTC (permalink / raw)
To: Keir Fraser; +Cc: Juergen Gross, xen-devel@lists.xensource.com
Keir Fraser writes ("Re: [Xen-devel] [Patch] use full-size cpumask for vcpu-pin"):
> Actually, now I look at it, I have to NACK the patch. Sorry I didn't look
> closely enough earlier. I think the bug can be addressed without any
> hypervisor changes: when vcpu-pinning, the tools can quite correctly pass a
> cpumask to Xen just big enough to express the CPUs in the new affinity
> set. If the resulting mask is too narrow to address all CPUs in the system,
> then Xen will pad it out with zeroes. If the resulting mask is too wide, Xen
> will simply truncate it. All this is done silently at the time of the
> setvcpuaffinity hypercall.
The difficulty is, as I understand it, how to express the mask "all
CPUs present and future". Since the number of pcpus may be very
large. The tools presumably shouldn't pass a kilobyte of 0xff :-).
Ian.
* Re: [Patch] use full-size cpumask for vcpu-pin
2010-08-13 13:11 ` Ian Jackson
@ 2010-08-13 13:15 ` Keir Fraser
2010-08-13 13:21 ` Keir Fraser
0 siblings, 1 reply; 13+ messages in thread
From: Keir Fraser @ 2010-08-13 13:15 UTC (permalink / raw)
To: Ian Jackson; +Cc: Juergen Gross, xen-devel@lists.xensource.com
On 13/08/2010 14:11, "Ian Jackson" <Ian.Jackson@eu.citrix.com> wrote:
> Keir Fraser writes ("Re: [Xen-devel] [Patch] use full-size cpumask for
> vcpu-pin"):
>> Actually, now I look at it, I have to NACK the patch. Sorry I didn't look
>> closely enough earlier. I think the bug can be addressed without any
>> hypervisor changes: when vcpu-pinning, the tools can quite correctly pass a
>> cpumask to Xen just big enough to express the CPUs in the new affinity
>> set. If the resulting mask is too narrow to address all CPUs in the system,
>> then Xen will pad it out with zeroes. If the resulting mask is too wide, Xen
>> will simply truncate it. All this is done silently at the time of the
>> setvcpuaffinity hypercall.
>
> The difficulty is, as I understand it, how to express the mask "all
> CPUs present and future". Since the number of pcpus may be very
> large. The tools presumably shouldn't pass a kilobyte of 0xff :-).
Since Xen discards any CPUs which are not online at the time of the
setvcpuaffinity hypercall, expressing "all CPUs present and future" is a bit
pointless.
-- Keir
* Re: [Patch] use full-size cpumask for vcpu-pin
2010-08-13 13:15 ` Keir Fraser
@ 2010-08-13 13:21 ` Keir Fraser
2010-08-13 13:25 ` Keir Fraser
0 siblings, 1 reply; 13+ messages in thread
From: Keir Fraser @ 2010-08-13 13:21 UTC (permalink / raw)
To: Ian Jackson; +Cc: Juergen Gross, xen-devel@lists.xensource.com
On 13/08/2010 14:15, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:
> On 13/08/2010 14:11, "Ian Jackson" <Ian.Jackson@eu.citrix.com> wrote:
>
>> The difficulty is, as I understand it, how to express the mask "all
>> CPUs present and future". Since the number of pcpus may be very
>> large. The tools presumably shouldn't pass a kilobyte of 0xff :-).
>
> Since Xen discards any CPUs which are not online at the time of the
> setvcpuaffinity hypercall, expressing "all CPUs present and future" is a bit
> pointless.
Hm, no, I'm talking rubbish. Argh. Okay, his patch is fine then. :-D
You can check it in to xen-unstable-tools.hg.
-- Keir
* Re: [Patch] use full-size cpumask for vcpu-pin
2010-08-13 13:21 ` Keir Fraser
@ 2010-08-13 13:25 ` Keir Fraser
2010-08-13 13:29 ` Ian Jackson
2010-08-13 13:30 ` Keir Fraser
0 siblings, 2 replies; 13+ messages in thread
From: Keir Fraser @ 2010-08-13 13:25 UTC (permalink / raw)
To: Ian Jackson; +Cc: Juergen Gross, xen-devel@lists.xensource.com
On 13/08/2010 14:21, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:
>> Since Xen discards any CPUs which are not online at the time of the
>> setvcpuaffinity hypercall, expressing "all CPUs present and future" is a bit
>> pointless.
>
> Hm, no, I'm talking rubbish. Argh. Okay, his patch is fine then. :-D
>
> You can check it in to xen-unstable-tools.hg.
One suggestion: that we rename the sysctl.physinfo field 'max_phys_cpus' to
'max_possible_cpus'. The 'phys' is kind of redundant since this is the
physinfo sysctl, and 'possible' provides a better hint that this field
indicates the maximum possible supported CPUs now and forever on this boot of
the system.
-- Keir
* Re: [Patch] use full-size cpumask for vcpu-pin
2010-08-13 13:25 ` Keir Fraser
@ 2010-08-13 13:29 ` Ian Jackson
2010-08-13 13:30 ` Keir Fraser
1 sibling, 0 replies; 13+ messages in thread
From: Ian Jackson @ 2010-08-13 13:29 UTC (permalink / raw)
To: Keir Fraser; +Cc: Juergen Gross, xen-devel@lists.xensource.com
Keir Fraser writes ("Re: [Xen-devel] [Patch] use full-size cpumask for vcpu-pin"):
> On 13/08/2010 14:21, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:
> > Hm, no, I'm talking rubbish. Argh. Okay, his patch is fine then. :-D
> > You can check it in to xen-unstable-tools.hg.
:-).
> One suggestion: that we rename the sysctl.physinfo field 'max_phys_cpus' to
> 'max_possible_cpus'. The 'phys' is kind of redundant since this is the
> physinfo sysctl, and 'possible' provides a better hint that this field
> indicates the maximum possible supported CPUs now and forever on this boot of
> the system.
Right, willdo.
Ian.
* Re: [Patch] use full-size cpumask for vcpu-pin
2010-08-13 13:25 ` Keir Fraser
2010-08-13 13:29 ` Ian Jackson
@ 2010-08-13 13:30 ` Keir Fraser
2010-08-13 14:09 ` Keir Fraser
1 sibling, 1 reply; 13+ messages in thread
From: Keir Fraser @ 2010-08-13 13:30 UTC (permalink / raw)
To: Ian Jackson; +Cc: Juergen Gross, xen-devel@lists.xensource.com
On 13/08/2010 14:25, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:
> On 13/08/2010 14:21, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:
>
>>> Since Xen discards any CPUs which are not online at the time of the
>>> setvcpuaffinity hypercall, expressing "all CPUs present and future" is a bit
>>> pointless.
>>
>> Hm, no, I'm talking rubbish. Argh. Okay, his patch is fine then. :-D
>>
>> You can check it in to xen-unstable-tools.hg.
>
> One suggestion: that we rename the sysctl.physinfo field 'max_phys_cpus' to
> 'max_possible_cpus'. The 'phys' is kind of redundant since this is the
> physinfo sysctl, and 'possible' provides a better hint that this field
> indicates the maximum possible supported CPUs now and forever on this boot of
> the system.
Even better, let's not introduce a new field at all, and let's always set
sysctl.phys_info.max_cpu_id to NR_CPUS-1. Then I don't think any tools
changes are needed.
Yeah, that's what we should do. I will make a patch for that.
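A sketch of that idea in the XEN_SYSCTL_physinfo handler (illustrative only;
the previous assignment of max_cpu_id is elided, and this is not the
committed patch):

/* Always report the build-time maximum, so existing tools size their
 * cpumaps to cover cpus hot-plugged after pinning as well. */
pi->max_cpu_id = NR_CPUS - 1;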
-- Keir
* Re: [Patch] use full-size cpumask for vcpu-pin
2010-08-13 13:30 ` Keir Fraser
@ 2010-08-13 14:09 ` Keir Fraser
0 siblings, 0 replies; 13+ messages in thread
From: Keir Fraser @ 2010-08-13 14:09 UTC (permalink / raw)
To: Ian Jackson; +Cc: Juergen Gross, xen-devel@lists.xensource.com
On 13/08/2010 14:30, "Keir Fraser" <keir.fraser@eu.citrix.com> wrote:
>> One suggestion: that we rename the sysctl.physinfo field 'max_phys_cpus' to
>> 'max_possible_cpus'. The 'phys' is kind of redundant since this is the
>> physinfo sysctl, and 'possible' provides a better hint that this field
>> indicates the maximum possible supported CPUs now and forever on this boot of
>> the system.
>
> Even better, let's not introduce a new field at all, and let's always set
> sysctl.phys_info.max_cpu_id to NR_CPUS-1. Then I don't think any tools
> changes are needed.
>
> Yeah, that's what we should do. I will make a patch for that.
Done as xen-unstable:f6e1a597a92f. At this late stage it's probably a bit
too subtle for backport to 4.0.1, unfortunately (I have to wonder if there
could be subtle side effects on xend's usage of these max_*_id fields).
-- Keir