* VM memory allocation speed with cs 26056
From: Zhigang Wang @ 2012-11-12 15:01 UTC
To: Keir Fraser, Jan Beulich; +Cc: Dan Magenheimer, Konrad Wilk, xen-devel
Hi Keir/Jan,
Recently I got a chance to access a big machine (2T mem/160 cpus) and I tested
your patch: http://xenbits.xen.org/hg/xen-unstable.hg/rev/177fdda0be56
Attached is the result.
Test environment old:
# xm info
host : ovs-3f-9e-04
release : 2.6.39-300.17.1.el5uek
version : #1 SMP Fri Oct 19 11:30:08 PDT 2012
machine : x86_64
nr_cpus : 160
nr_nodes : 8
cores_per_socket : 10
threads_per_core : 2
cpu_mhz : 2394
hw_caps :
bfebfbff:2c100800:00000000:00003f40:02bee3ff:00000000:00000001:00000000
virt_caps : hvm hvm_directio
total_memory : 2097142
free_memory : 2040108
free_cpus : 0
xen_major : 4
xen_minor : 1
xen_extra : .3OVM
xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler : credit
xen_pagesize : 4096
platform_params : virt_start=0xffff800000000000
xen_changeset : unavailable
xen_commandline : dom0_mem=31390M no-bootscrub
cc_compiler : gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)
cc_compile_by : mockbuild
cc_compile_domain : us.oracle.com
cc_compile_date : Fri Oct 19 21:34:08 PDT 2012
xend_config_format : 4
# uname -a
Linux ovs-3f-9e-04 2.6.39-300.17.1.el5uek #1 SMP Fri Oct 19 11:30:08 PDT
2012 x86_64 x86_64 x86_64 GNU/Linux
# cat /boot/grub/grub.conf
...
kernel /xen.gz dom0_mem=31390M no-bootscrub dom0_vcpus_pin dom0_max_vcpus=32
Test environment new: old env + cs 26056
Test script: test-vm-memory-allocation.sh (attached)
My conclusion from the test:
- HVM create time is greatly reduced.
- PVM create time is increased dramatically for 4G, 8G, 16G, 32G, 64G, 128G.
- HVM/PVM destroy time is not affected.
- If most of our customers are using PVM, I think this patch is bad, because
most VM memory should be under 128G.
- If they are using HVM, then this patch is great.
Questions for discussion:
- Did you get the same result?
- It seems this result is not ideal. We may need to improve it.
Please note: I may not have access to the same machine for a while.
Thanks,
Zhigang
[-- Attachment #2: result.pdf --]
[-- Type: application/pdf, Size: 29421 bytes --]
[-- Attachment #3: result.csv --]
[-- Type: text/csv, Size: 846 bytes --]
memory_size_gb,hvm_old_create,hvm_new_create,pvm_old_create,pvm_new_create,hvm_old_destroy,hvm_new_destroy,pvm_old_destroy,pvm_new_destroy
1,1.159,1.093,3.222,4.034,1.024,1.185,1.020,1.040
2,1.316,0.888,4.408,5.443,1.327,1.961,1.979,1.990
4,1.382,0.962,4.622,8.367,3.633,2.858,2.893,2.901
8,1.440,1.057,4.969,14.272,4.035,5.305,5.337,6.668
16,1.539,1.264,5.607,25.758,12.862,12.828,12.865,12.886
32,1.754,1.665,7.066,48.729,25.302,19.945,19.978,19.987
64,2.188,2.499,9.752,97.022,28.790,39.350,50.048,50.118
128,10.949,4.090,15.234,189.261,57.128,78.240,78.440,78.432
256,418.667,10.275,452.320,375.961,114.851,156.496,198.798,156.391
512,430.449,14.831,494.971,752.235,355.738,353.989,354.502,382.305
1024,867.145,28.570,1004.788,818.385,626.623,747.762,751.686,752.019
1500,1716.501,38.601,1968.900,1599.078,1083.900,1006.805,1000.907,1006.042
[-- Attachment #4: test-vm-memory-allocation.sh --]
[-- Type: application/x-shellscript, Size: 405 bytes --]
* Re: VM memory allocation speed with cs 26056
From: Jan Beulich @ 2012-11-12 15:17 UTC
To: Zhigang Wang; +Cc: Dan Magenheimer, Konrad Wilk, Keir Fraser, xen-devel
>>> On 12.11.12 at 16:01, Zhigang Wang <zhigang.x.wang@oracle.com> wrote:
> My conclusion from the test:
>
> - HVM create time is greatly reduced.
> - PVM create time is increased dramatically for 4G, 8G, 16G, 32G, 64G, 128G.
> - HVM/PVM destroy time is not affected.
> - If most of our customers are using PVM, I think this patch is bad: because
> most VM memory should under 128G.
> - If they are using HVM, then this patch is great.
>
> Questions for discussion:
>
> - Did you get the same result?
> - It seems this result is not ideal. We may need to improve it.
We'd first of all need to understand how this rather odd behavior
can be explained. In order to have a better comparison basis, did
you also do this for traditional PV? Or maybe I misunderstand
what PVM stands for, and am mixing it up with PVH? You certainly
agree that the two curves for what you call PVM have quite an
unusual relationship.
Jan
* Re: VM memory allocation speed with cs 26056
From: Zhigang Wang @ 2012-11-12 15:57 UTC
To: Jan Beulich; +Cc: Dan Magenheimer, Konrad Wilk, Keir Fraser, xen-devel
On 11/12/2012 10:17 AM, Jan Beulich wrote:
>>>> On 12.11.12 at 16:01, Zhigang Wang <zhigang.x.wang@oracle.com> wrote:
>> My conclusion from the test:
>>
>> - HVM create time is greatly reduced.
>> - PVM create time is increased dramatically for 4G, 8G, 16G, 32G, 64G, 128G.
>> - HVM/PVM destroy time is not affected.
>> - If most of our customers are using PVM, I think this patch is bad: because
>> most VM memory should under 128G.
>> - If they are using HVM, then this patch is great.
>>
>> Questions for discussion:
>>
>> - Did you get the same result?
>> - It seems this result is not ideal. We may need to improve it.
> We'd first of all need to understand how this rather odd behavior
> can be explained. In order to have a better comparison basis, did
> you also do this for traditional PV? Or maybe I misunderstand
> what PVM stands for, and am mixing it up with PVH? You certainly
> agree that the two curves for what you call PVM have quite
> unusual a relationship.
>
Let me attach the HVM and PV guest configuration files.
Actually I use 'xm create -p' to create the VM paused and then destroy it
immediately, so the guest kernel doesn't matter.
Please see the test script for details.
Thanks,
Zhigang
[-- Attachment #2: pvm.cfg --]
[-- Type: text/plain, Size: 363 bytes --]
vif = []
disk = ['file:/OVS/Repositories/0004fb0000030000013b01f1d983f4db/VirtualDisks/0004fb00001200004b87f14bde146192.img,xvda,r']
uuid = '0004fb00-0006-0000-7a63-1bf13fa33c30'
on_reboot = 'restart'
memory = 1536000
bootloader = '/usr/bin/pygrub'
name = '0004fb00000600007a631bf13fa33c30'
vfb = ['type=vnc,vncunused=1,vnclisten=0.0.0.0,keymap=en-us']
vcpus = 1
[-- Attachment #3: hvm.cfg --]
[-- Type: text/plain, Size: 226 bytes --]
serial = 'pty'
disk = ['file:/share/vm/microcore_3.5.1.iso,hdc:cdrom,r']
boot = 'd'
memory = 4096
pae = 1
acpi = 1
apic = 1
vnc = 1
vncunused = 1
vnclisten = '0.0.0.0'
name = 'microcorelinux_x86_hvm'
builder = 'hvm'
vcpus = 1
* Re: VM memory allocation speed with cs 26056
From: Dan Magenheimer @ 2012-11-12 16:25 UTC
To: Jan Beulich, Zhigang Wang; +Cc: Konrad Wilk, Keir Fraser, xen-devel
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Subject: Re: VM memory allocation speed with cs 26056
>
> >>> On 12.11.12 at 16:01, Zhigang Wang <zhigang.x.wang@oracle.com> wrote:
> > My conclusion from the test:
> >
> > - HVM create time is greatly reduced.
> > - PVM create time is increased dramatically for 4G, 8G, 16G, 32G, 64G, 128G.
> > - HVM/PVM destroy time is not affected.
> > - If most of our customers are using PVM, I think this patch is bad: because
> > most VM memory should under 128G.
> > - If they are using HVM, then this patch is great.
> >
> > Questions for discussion:
> >
> > - Did you get the same result?
> > - It seems this result is not ideal. We may need to improve it.
>
> We'd first of all need to understand how this rather odd behavior
> can be explained. In order to have a better comparison basis, did
> you also do this for traditional PV? Or maybe I misunderstand
> what PVM stands for, and am mixing it up with PVH? You certainly
> agree that the two curves for what you call PVM have quite
> unusual a relationship.
("PVM" is unfortunately often used within Oracle and means the
same as "PV". "PVM" == paravirtualized virtual machine.)
One significant difference is that a PV domain always allocates
memory one 4K page at a time and the patch improves allocation
performance only for larger-order allocations. A reasonable
hypothesis is that the patch reduces performance on long
sequences of 4K pages, though this doesn't explain the curve
of the PV_create measurements at 256G and above.
With a one-line* hypervisor patch in alloc_heap_pages, one can
change HVM allocation so that all larger allocations are rejected.
It would be very interesting to see if that would result in an HVM
create curve similar to the PV create curve.
* change "unlikely(order > MAX_ORDER)" to "order > 0"
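A minimal, compilable sketch of that experiment (not the real Xen page
allocator code; the function shape, the MAX_ORDER value and all names here
are simplified stand-ins) shows where the guard sits and what the one-line
change does: with FORCE_ORDER_0 set, every request larger than a single 4K
page is refused, forcing callers into the same one-page-at-a-time pattern a
PV build already uses.

#include <stdio.h>

#define MAX_ORDER     20   /* assumption for the sketch; the real value differs */
#define FORCE_ORDER_0  1   /* the experimental one-line change described above */

struct page_info { int unused; };
static struct page_info fake_page;      /* stands in for the real buddy allocator */

static struct page_info *alloc_heap_pages_sketch(unsigned int order)
{
    if (order > MAX_ORDER)              /* original guard: reject impossible sizes */
        return NULL;
#if FORCE_ORDER_0
    if (order > 0)                      /* experiment: refuse any multi-page request */
        return NULL;
#endif
    return &fake_page;                  /* ... real allocation logic elided ... */
}

int main(void)
{
    printf("order 0 (one 4K page):  %s\n",
           alloc_heap_pages_sketch(0) ? "allocated" : "rejected");
    printf("order 9 (2M superpage): %s\n",
           alloc_heap_pages_sketch(9) ? "allocated" : "rejected");
    return 0;
}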
* Re: VM memory allocation speed with cs 26056
From: Keir Fraser @ 2012-11-12 18:25 UTC
To: Zhigang Wang, Jan Beulich
Cc: Dan Magenheimer, Konrad Rzeszutek Wilk, xen-devel
On 12/11/2012 15:01, "Zhigang Wang" <zhigang.x.wang@oracle.com> wrote:
> Hi Keir/Jan,
>
> Recently I got a chance to access a big machine (2T mem/160 cpus) and I tested
> your patch: http://xenbits.xen.org/hg/xen-unstable.hg/rev/177fdda0be56
>
> Attached is the result.
The PVM result is weird: there is a small-ish slowdown for small domains,
becoming a very large %age slowdown as domain memory increases, and then
turning into a *speedup* as the memory size gets very large indeed.
What are the error bars like on these measurements I wonder? One thing we
could do to allow PV guests doing 4k-at-a-time allocations through
alloc_heap_pages() to benefit from the TLB-flush improvements, is pull the
filtering-and-flush out into populate_physmap() and increase_reservation().
This is listed as a todo in the original patch (26056).
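A very rough sketch of that todo follows; every type and function name in it
is a stand-in invented for illustration, not Xen's actual interface. The
point is only the shape of the change: the per-page allocation path records
which CPUs might still hold stale TLB entries, and the batch-level caller
(the populate_physmap()/increase_reservation() analogue) issues one filtered
flush for the whole batch instead of one per page.

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long cpumask_sk;        /* stand-in for Xen's cpumask_t */

struct page_sk {
    unsigned int tlbflush_stamp;         /* when the previous owner could last have run */
    cpumask_sk   dirty_cpus;             /* CPUs that may still cache stale entries */
};

static unsigned int flush_clock = 100;   /* pretend global TLB-flush clock */

/* Per-page test: might the previous owner's TLB entries still be live somewhere? */
static bool page_needs_flush(const struct page_sk *pg)
{
    return pg->dirty_cpus && pg->tlbflush_stamp >= flush_clock - 1;
}

/* Allocate one page; *record* any needed flush instead of performing it. */
static struct page_sk *alloc_page_sk(struct page_sk *pool, unsigned long i,
                                     cpumask_sk *need_flush)
{
    struct page_sk *pg = &pool[i];
    if (page_needs_flush(pg))
        *need_flush |= pg->dirty_cpus;   /* defer: the batch-level caller flushes once */
    return pg;
}

int main(void)
{
    enum { NR_EXTENTS = 8 };
    struct page_sk pool[NR_EXTENTS] = {
        [2] = { 99, 0x5 },               /* two pages whose old entries may survive */
        [5] = { 100, 0xa },
    };
    cpumask_sk need_flush = 0;

    /* populate_physmap()/increase_reservation()-style loop over a whole batch. */
    for (unsigned long i = 0; i < NR_EXTENTS; i++)
        alloc_page_sk(pool, i, &need_flush);

    if (need_flush)                      /* one filtered flush per batch, not per page */
        printf("issue one TLB flush for CPU mask 0x%lx\n", need_flush);
    return 0;
}

For a PV build this would mean that millions of order-0 allocations share a
single flush decision per batch rather than paying for one on every page.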
To be honest I don't know why the original patch would make PV domain
creation slower, and certainly not by a varying %age depending on domain
memory size!
-- Keir
> [full quote of the original message trimmed]
* Re: VM memory allocation speed with cs 26056
From: Zhigang Wang @ 2012-11-13 15:46 UTC
To: Keir Fraser
Cc: Dan Magenheimer, xen-devel, Jan Beulich, Konrad Rzeszutek Wilk
On 11/12/2012 01:25 PM, Keir Fraser wrote:
> On 12/11/2012 15:01, "Zhigang Wang" <zhigang.x.wang@oracle.com> wrote:
>
>> Hi Keir/Jan,
>>
>> Recently I got a chance to access a big machine (2T mem/160 cpus) and I tested
>> your patch: http://xenbits.xen.org/hg/xen-unstable.hg/rev/177fdda0be56
>>
>> Attached is the result.
> The PVM result is weird, there is a small-ish slowdown for small domains,
> becoming a very large %age slowdown as domain memory increases, and then
> turning into a *speedup* as the memory size gets very large indeed.
>
> What are the error bars like on these measurements I wonder? One thing we
> could do to allow PV guests doing 4k-at-a-time allocations through
> alloc_heap_pages() to benefit from the TLB-flush improvements, is pull the
> filtering-and-flush out into populate_physmap() and increase_reservation().
> This is listed as a todo in the original patch (26056).
>
> To be honest I don't know why the original patch would make PV domain
> creation slower, and certainly not by a varying %age depending on domain
> memory size!
>
> -- Keir
I did it a second time. The result (attached) seems promising.
I think the strange result is due to the order of testing:
start_physical_machine -> test_hvm -> test_pvm.
This time, I did: start_physical_machine -> test_pvm -> test_hvm.
You can see that the PVM memory allocation speed is not affected by your patch this time.
So I believe this patch is excellent now.
Thanks,
Zhigang
[-- Attachment #2: result.pdf --]
[-- Type: application/pdf, Size: 15643 bytes --]
* Re: VM memory allocation speed with cs 26056
From: Dan Magenheimer @ 2012-11-13 16:13 UTC
To: Zhigang Wang, Keir Fraser; +Cc: xen-devel, Jan Beulich, Konrad Wilk
> From: Zhigang Wang
> Subject: Re: [Xen-devel] VM memory allocation speed with cs 26056
>
> On 11/12/2012 01:25 PM, Keir Fraser wrote:
> > On 12/11/2012 15:01, "Zhigang Wang" <zhigang.x.wang@oracle.com> wrote:
> >
> >> Hi Keir/Jan,
> >>
> >> Recently I got a chance to access a big machine (2T mem/160 cpus) and I tested
> >> your patch: http://xenbits.xen.org/hg/xen-unstable.hg/rev/177fdda0be56
> >>
> >> Attached is the result.
> > The PVM result is weird, there is a small-ish slowdown for small domains,
> > becoming a very large %age slowdown as domain memory increases, and then
> > turning into a *speedup* as the memory size gets very large indeed.
> >
> > What are the error bars like on these measurements I wonder? One thing we
> > could do to allow PV guests doing 4k-at-a-time allocations through
> > alloc_heap_pages() to benefit from the TLB-flush improvements, is pull the
> > filtering-and-flush out into populate_physmap() and increase_reservation().
> > This is listed as a todo in the original patch (26056).
> >
> > To be honest I don't know why the original patch would make PV domain
> > creation slower, and certainly not by a varying %age depending on domain
> > memory size!
> >
> > -- Keir
> I did it second time. It seems the result (attached) is promising.
>
> I think the strange result is due to the order of testing:
> start_physical_machine -> test_hvm -> test_pvm.
>
> This time, I did: start_physical_machine -> test_pvm -> test_hvm.
>
> You can see the pvm memory allocation speed is not affected by your patch this time.
>
> So I believe this patch is excellent now.
I don't know about Keir's opinion, but to me the performance dependency
on ordering is even more bizarre and still calls into question the
acceptability of the patch. Customers aren't going to like being
told that, for best results, they should launch all PV domains before
their HVM domains.
Is scrubbing still/ever done lazily? That might explain some of
the weirdness.
Dan
* Re: VM memory allocation speed with cs 26056
From: Keir Fraser @ 2012-11-13 17:17 UTC
To: Dan Magenheimer, Zhigang Wang
Cc: xen-devel, Jan Beulich, Konrad Rzeszutek Wilk
On 13/11/2012 16:13, "Dan Magenheimer" <dan.magenheimer@oracle.com> wrote:
>> I did it second time. It seems the result (attached) is promising.
>>
>> I think the strange result is due to the order of testing:
>> start_physical_machine -> test_hvm -> test_pvm.
>>
>> This time, I did: start_physical_machine -> test_pvm -> test_hvm.
>>
>> You can see the pvm memory allocation speed is not affected by your patch
>> this time.
>>
>> So I believe this patch is excellent now.
>
> I don't know about Keir's opinion, but to me the performance dependency
> on ordering is even more bizarre and still calls into question the
> acceptability of the patch. Customers aren't going to like being
> told that, for best results, they should launch all PV domains before
> their HVM domains.
Yes it's very odd. I would want to see more runs to be convinced that
nothing else weird is going on and affecting the results.
If it is something to do with the movement of the TLB-flush filtering logic,
it may just be a tiny difference being amplified by the vast number of times
alloc_heap_pages() gets called. We should be able to actually *improve* PV
build performance by pulling the flush filtering out into populate_physmap
and increase_reservation.
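To put a number on "vast", here is a tiny back-of-the-envelope model; the 2M
(order-9) extent size assumed for HVM builds is purely illustrative, not
something measured in this thread.

#include <stdio.h>

int main(void)
{
    /* Domain sizes roughly matching the test table, in GB. */
    const unsigned long sizes_gb[] = { 1, 8, 64, 128, 512, 1536 };
    const unsigned int n = sizeof(sizes_gb) / sizeof(sizes_gb[0]);

    for (unsigned int i = 0; i < n; i++) {
        unsigned long pages     = sizes_gb[i] << (30 - 12); /* 4K pages in the domain */
        unsigned long pv_calls  = pages;                    /* PV: one call per 4K page */
        unsigned long hvm_calls = pages >> 9;               /* assumed 2M (order-9) extents */
        printf("%5lu GB: %11lu PV allocator calls vs ~%9lu HVM calls\n",
               sizes_gb[i], pv_calls, hvm_calls);
    }
    return 0;
}

At 1.5T that is roughly 400 million order-0 calls for a PV build. For the
128G case in the first run, the ~174 s difference between old and new PV
create times, spread over ~33.5 million order-0 calls, works out to only
about 5 microseconds of extra work per call.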
> Is scrubbing still/ever done lazily? That might explain some of
> the weirdness.
No lazy scrubbing any more.
-- Keir
* Re: VM memory allocation speed with cs 26056
From: Zhigang Wang @ 2012-11-16 18:42 UTC
To: Keir Fraser
Cc: Dan Magenheimer, Konrad Rzeszutek Wilk, Jan Beulich, xen-devel
On 11/13/2012 10:46 AM, Zhigang Wang wrote:
> On 11/12/2012 01:25 PM, Keir Fraser wrote:
>> On 12/11/2012 15:01, "Zhigang Wang" <zhigang.x.wang@oracle.com> wrote:
>>
>>> Hi Keir/Jan,
>>>
>>> Recently I got a chance to access a big machine (2T mem/160 cpus) and I tested
>>> your patch: http://xenbits.xen.org/hg/xen-unstable.hg/rev/177fdda0be56
>>>
>>> Attached is the result.
>> The PVM result is weird, there is a small-ish slowdown for small domains,
>> becoming a very large %age slowdown as domain memory increases, and then
>> turning into a *speedup* as the memory size gets very large indeed.
>>
>> What are the error bars like on these measurements I wonder? One thing we
>> could do to allow PV guests doing 4k-at-a-time allocations through
>> alloc_heap_pages() to benefit from the TLB-flush improvements, is pull the
>> filtering-and-flush out into populate_physmap() and increase_reservation().
>> This is listed as a todo in the original patch (26056).
>>
>> To be honest I don't know why the original patch would make PV domain
>> creation slower, and certainly not by a varying %age depending on domain
>> memory size!
>>
>> -- Keir
> I did it second time. It seems the result (attached) is promising.
>
> I think the strange result is due to the order of testing:
> start_physical_machine -> test_hvm -> test_pvm.
>
> This time, I did: start_physical_machine -> test_pvm -> test_hvm.
>
> You can see the pvm memory allocation speed is not affected by your patch this time.
>
> So I believe this patch is excellent now.
I got another chance to run the test without (old) and with (new) cs 26056 with the
order:
start_physical_machine -> test_hvm -> test_pvm
It seems PV guest memory allocation is not affected by this patch, although it
makes a big difference if testing with order:
start_physical_machine -> test_pvm -> test_hvm
Thanks for the patch.
Zhigang
[-- Attachment #2: result.pdf --]
[-- Type: application/pdf, Size: 15769 bytes --]