From: Keir Fraser <keir.xen@gmail.com>
To: Zhigang Wang <zhigang.x.wang@oracle.com>,
Jan Beulich <jbeulich@suse.com>
Cc: Dan Magenheimer <dan.magenheimer@oracle.com>,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
xen-devel <xen-devel@lists.xen.org>
Subject: Re: VM memory allocation speed with cs 26056
Date: Mon, 12 Nov 2012 18:25:49 +0000
Message-ID: <CCC6EFAD.44728%keir.xen@gmail.com>
In-Reply-To: <50A10F3E.2030808@oracle.com>
On 12/11/2012 15:01, "Zhigang Wang" <zhigang.x.wang@oracle.com> wrote:
> Hi Keir/Jan,
>
> Recently I got a chance to access a big machine (2T mem/160 cpus) and I tested
> your patch: http://xenbits.xen.org/hg/xen-unstable.hg/rev/177fdda0be56
>
> Attached is the result.

The PVM result is weird: there is a small-ish slowdown for small domains,
which becomes a very large %age slowdown as domain memory increases, and then
turns into a *speedup* as the memory size gets very large indeed. What are
the error bars like on these measurements, I wonder?

One thing we could do to allow PV guests doing 4k-at-a-time allocations
through alloc_heap_pages() to benefit from the TLB-flush improvements is to
pull the filtering-and-flush out into populate_physmap() and
increase_reservation().
This is listed as a todo in the original patch (26056).
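
Roughly, the shape of that change might look like the sketch below (helper
names such as alloc_domheap_page_no_flush() and page_stale_cpus() are made
up here purely for illustration; this is not the actual patch):

/*
 * Illustrative sketch only.  Instead of alloc_heap_pages() filtering and
 * flushing the TLB for every single 4k allocation a PV guest makes, the
 * batched callers (populate_physmap(), increase_reservation()) accumulate
 * the set of CPUs that may still hold stale TLB entries and issue one
 * flush for the whole batch.
 */
static void populate_physmap_batched(struct domain *d, unsigned int nr_extents)
{
    cpumask_t stale;            /* CPUs that may still cache freed mappings */
    unsigned int i;

    cpumask_clear(&stale);

    for ( i = 0; i < nr_extents; i++ )
    {
        struct page_info *pg;

        /* Hypothetical allocator variant that skips the per-page flush. */
        pg = alloc_domheap_page_no_flush(d);
        if ( pg == NULL )
            break;

        /*
         * Hypothetical helper: add only those CPUs which have not flushed
         * since this page was last freed -- the same timestamp check
         * alloc_heap_pages() does today, just deferred.
         */
        page_stale_cpus(pg, &stale);

        /* ... hand the page to the guest physmap as usual ... */
    }

    /* One flush for the whole batch rather than one per page. */
    if ( !cpumask_empty(&stale) )
        flush_tlb_mask(&stale);
}

The point is simply to amortise the flush cost over a whole hypercall's
worth of allocations instead of paying it per page.
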
To be honest, I don't know why the original patch would make PV domain
creation slower, and certainly not by a %age that varies with domain
memory size!
-- Keir
> Test environment (old):
>
> # xm info
> host : ovs-3f-9e-04
> release : 2.6.39-300.17.1.el5uek
> version : #1 SMP Fri Oct 19 11:30:08 PDT 2012
> machine : x86_64
> nr_cpus : 160
> nr_nodes : 8
> cores_per_socket : 10
> threads_per_core : 2
> cpu_mhz : 2394
> hw_caps : bfebfbff:2c100800:00000000:00003f40:02bee3ff:00000000:00000001:00000000
> virt_caps : hvm hvm_directio
> total_memory : 2097142
> free_memory : 2040108
> free_cpus : 0
> xen_major : 4
> xen_minor : 1
> xen_extra : .3OVM
> xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_scheduler : credit
> xen_pagesize : 4096
> platform_params : virt_start=0xffff800000000000
> xen_changeset : unavailable
> xen_commandline : dom0_mem=31390M no-bootscrub
> cc_compiler : gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)
> cc_compile_by : mockbuild
> cc_compile_domain : us.oracle.com
> cc_compile_date : Fri Oct 19 21:34:08 PDT 2012
> xend_config_format : 4
>
> # uname -a
> Linux ovs-3f-9e-04 2.6.39-300.17.1.el5uek #1 SMP Fri Oct 19 11:30:08 PDT 2012 x86_64 x86_64 x86_64 GNU/Linux
>
> # cat /boot/grub/grub.conf
> ...
> kernel /xen.gz dom0_mem=31390M no-bootscrub dom0_vcpus_pin dom0_max_vcpus=32
>
> Test environment (new): old environment + cs 26056
>
> Test script: test-vm-memory-allocation.sh (attached)
>
> My conclusions from the test:
>
> - HVM create time is greatly reduced.
> - PVM create time is increased dramatically for 4G, 8G, 16G, 32G, 64G, 128G.
> - HVM/PVM destroy time is not affected.
> - If most of our customers are using PVM, I think this patch is bad, because
> most VM memory should be under 128G.
> - If they are using HVM, then this patch is great.
>
> Questions for discussion:
>
> - Did you get the same result?
> - It seems this result is not ideal. We may need to improve it.
>
> Please note: I may not have access to the same machine for a while.
>
> Thanks,
>
> Zhigang
>
Thread overview: 9+ messages
2012-11-12 15:01 VM memory allocation speed with cs 26056 Zhigang Wang
2012-11-12 15:17 ` Jan Beulich
2012-11-12 15:57 ` Zhigang Wang
2012-11-12 16:25 ` Dan Magenheimer
2012-11-12 18:25 ` Keir Fraser [this message]
2012-11-13 15:46 ` Zhigang Wang
2012-11-13 16:13 ` Dan Magenheimer
2012-11-13 17:17 ` Keir Fraser
2012-11-16 18:42 ` Zhigang Wang