From: Zhigang Wang <zhigang.x.wang@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Dan Magenheimer <dan.magenheimer@oracle.com>,
Konrad Wilk <konrad.wilk@oracle.com>, Keir Fraser <keir@xen.org>,
xen-devel <xen-devel@lists.xen.org>
Subject: Re: VM memory allocation speed with cs 26056
Date: Mon, 12 Nov 2012 10:57:37 -0500
Message-ID: <50A11C71.4020507@oracle.com>
In-Reply-To: <50A1210902000078000A7C86@nat28.tlf.novell.com>
[-- Attachment #1: Type: text/plain, Size: 1216 bytes --]
On 11/12/2012 10:17 AM, Jan Beulich wrote:
>>>> On 12.11.12 at 16:01, Zhigang Wang <zhigang.x.wang@oracle.com> wrote:
>> My conclusion from the test:
>>
>> - HVM create time is greatly reduced.
>> - PVM create time is increased dramatically for 4G, 8G, 16G, 32G, 64G, 128G.
>> - HVM/PVM destroy time is not affected.
>> - If most of our customers are using PVM, I think this patch is bad, because
>> most VM memory should be under 128G.
>> - If they are using HVM, then this patch is great.
>>
>> Questions for discussion:
>>
>> - Did you get the same result?
>> - It seems this result is not ideal. We may need to improve it.
> We'd first of all need to understand how this rather odd behavior
> can be explained. In order to have a better comparison basis, did
> you also do this for traditional PV? Or maybe I misunderstand
> what PVM stands for, and am mixing it up with PVH? You certainly
> agree that the two curves for what you call PVM have quite an
> unusual relationship.
>
Let me attach the HVM and PV guest configuration files.
I actually use xm create -p to create the VM paused, and destroy it
immediately, so the guest kernel doesn't matter.
Please see the test script for details.
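For reference, the measurement loop can be sketched roughly as below. This is a hypothetical reconstruction, not the actual test script (which was attached to the first message in the thread); the `time_cmd` helper name and the config filenames are illustrative, and the real `xm` invocations are shown only in comments so the sketch runs anywhere:

```shell
#!/bin/sh
# Time a single command in milliseconds using GNU date's nanosecond format.
time_cmd() {
    start=$(date +%s%N)
    "$@" >/dev/null 2>&1
    end=$(date +%s%N)
    ELAPSED_MS=$(( (end - start) / 1000000 ))
}

# In the real test the timed commands would be along the lines of:
#   time_cmd xm create -p pvm.cfg   # -p creates the domain paused, so only
#                                   # memory allocation is measured, not boot
#   time_cmd xm destroy 0004fb00000600007a631bf13fa33c30
# A no-op stands in here so the sketch is self-contained.
time_cmd true
echo "elapsed: ${ELAPSED_MS} ms"
```

Because the domain is created paused and destroyed before it ever runs, the elapsed time for `xm create -p` is dominated by Xen's memory allocation path, which is what the figures in the first message compare.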
Thanks,
Zhigang
[-- Attachment #2: pvm.cfg --]
[-- Type: text/plain, Size: 363 bytes --]
vif = []
disk = ['file:/OVS/Repositories/0004fb0000030000013b01f1d983f4db/VirtualDisks/0004fb00001200004b87f14bde146192.img,xvda,r']
uuid = '0004fb00-0006-0000-7a63-1bf13fa33c30'
on_reboot = 'restart'
memory = 1536000
bootloader = '/usr/bin/pygrub'
name = '0004fb00000600007a631bf13fa33c30'
vfb = ['type=vnc,vncunused=1,vnclisten=0.0.0.0,keymap=en-us']
vcpus = 1
[-- Attachment #3: hvm.cfg --]
[-- Type: text/plain, Size: 226 bytes --]
serial = 'pty'
disk = ['file:/share/vm/microcore_3.5.1.iso,hdc:cdrom,r']
boot = 'd'
memory = 4096
pae = 1
acpi = 1
apic = 1
vnc = 1
vncunused = 1
vnclisten = '0.0.0.0'
name = 'microcorelinux_x86_hvm'
builder = 'hvm'
vcpus = 1
[-- Attachment #4: Type: text/plain, Size: 126 bytes --]
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
Thread overview: 9+ messages
2012-11-12 15:01 VM memory allocation speed with cs 26056 Zhigang Wang
2012-11-12 15:17 ` Jan Beulich
2012-11-12 15:57 ` Zhigang Wang [this message]
2012-11-12 16:25 ` Dan Magenheimer
2012-11-12 18:25 ` Keir Fraser
2012-11-13 15:46 ` Zhigang Wang
2012-11-13 16:13 ` Dan Magenheimer
2012-11-13 17:17 ` Keir Fraser
2012-11-16 18:42 ` Zhigang Wang