From: Keir Fraser <keir.fraser@eu.citrix.com>
To: Andreas Olsowski <andreas.olsowski@uni.leuphana.de>,
"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: slow live magration / xc_restore on xen4 pvops
Date: Thu, 10 Jun 2010 10:27:29 +0100
Message-ID: <C8366E91.173BE%keir.fraser@eu.citrix.com>
In-Reply-To: <4C06E26E.3030404@uni.leuphana.de>
Andreas,
You can check whether this is fixed by the latest fixes in
http://xenbits.xensource.com/xen-4.0-testing.hg. You should only need to
rebuild and reinstall tools/libxc.
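(For reference, a rough sketch of that rebuild, assuming an hg checkout of the
tree above on a host that already builds the Xen tools; paths and the toolstack
restart may differ on your system:)

  # pull the 4.0 testing tree and rebuild only libxc
  hg clone http://xenbits.xensource.com/xen-4.0-testing.hg
  cd xen-4.0-testing.hg
  make -C tools/libxc
  make -C tools/libxc install
  # restart the toolstack so it picks up the new library
  /etc/init.d/xend restart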
Thanks,
Keir
On 02/06/2010 23:59, "Andreas Olsowski" <andreas.olsowski@uni.leuphana.de>
wrote:
> I did some further research now and shut down all virtual machines on
> xenturio1; after that I got (3 runs):
> (xm save takes ~5 seconds; user and sys are always negligible, so I
> removed those to reduce text)
>
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real 0m25.349s 0m27.456s 0m27.208s
>
> So the fact that there were running machines did impact performance of
> xc_restore.
>
> I proceeded to create 20 "dummy" VMs with 1 GB of RAM and 4 vCPUs
> each (dom0 has 4096M fixed, 24 GB total available):
> xenturio1:~# for i in {1..20} ; do echo creating dummy$i ; xt vm create
> dummy$i -vlan 27 -mem 1024 -cpus 4 ; done
> creating dummy1
> vm/create> successfully created vm 'dummy1'
> ....
> creating dummy20
> vm/create> successfully created vm 'dummy20'
>
> and started them
> for i in {1..20} ; do echo starting dummy$i ; xm start dummy$i ; done
>
> So my memory allocation should now be 100% (4 GB dom0 + 20 GB domUs), but
> why did I have 512 MB to spare for "saverestore-x1"? Oh well, onwards.
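>
> (One way to see where that spare 512 MB comes from would be to ask the
> hypervisor directly; a rough check, assuming the same xm toolstack:)
>
> # memory as the hypervisor sees it, before and after starting the domUs
> xm info | grep -E 'total_memory|free_memory'
> # per-domain allocations
> xm list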
>
> Once again I ran a save/restore, 3 times to be sure (the additional
> results are edited into the output below).
>
> With 20 running vms:
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real 1m16.375s 0m31.306s 1m10.214s
>
> With 16 running vms:
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real 1m49.741s 1m38.696s 0m55.615s
>
> With 12 running vms:
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real 1m3.101s 2m4.254s 1m27.193s
>
> With 8 running vms:
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real 0m36.867s 0m43.513s 0m33.199s
>
> With 4 running vms:
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real 0m40.454s 0m44.929s 1m7.215s
>
> Keep in mind, those domUs don't do anything at all; they just idle.
> What is going on there? The results seem completely random; running more
> domUs can be faster than running fewer. How is that even possible?
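>
> (To smooth out noise in numbers like these, one option would be to script a
> batch of back-to-back save/restore cycles and log each wall-clock time; a
> rough sketch, assuming the saved domain is named saverestore-x1:)
>
> for n in 1 2 3 4 5; do
>     xm save saverestore-x1 /var/saverestore-x1.mem
>     ( time xm restore /var/saverestore-x1.mem ) 2>&1 | grep real
> done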
>
> So I deleted the dummyX domains and started the productive domUs again, in 3
> steps to take further measurements:
>
>
> after first batch:
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real 0m23.968s 1m22.133s 1m24.420s
>
> after second batch:
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real 1m54.310s 1m11.340s 1m47.643s
>
> after third batch:
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real 1m52.065s 1m34.517s 2m8.644s 1m25.473s 1m35.943s 1m45.074s
> 1m48.407s 1m18.277s 1m18.931s 1m27.458s
>
> So my current guess is that xc_restore speed depends on the amount of used
> memory, or rather on how much is being grabbed by running processes. Does
> that make any sense?
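>
> (One way to test that guess would be to watch what dom0 itself is doing while
> a restore runs, e.g. whether the restore is CPU-bound or waiting; a rough
> sketch, and the process names to look for may differ between setups:)
>
> time xm restore /var/saverestore-x1.mem &
> # in parallel: dom0/domU CPU from the hypervisor's view, plus the restore helper in dom0
> xentop -b -i 3 -d 1
> top -b -n 3 | grep -i -E 'xc_restore|xend'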
>
> But if that is so, explain this:
> I started 3 VMs running "stress", which show:
> load average: 30.94, 30.04, 21.00
> Mem: 5909844k total, 4020480k used, 1889364k free, 288k buffers
>
> But still:
> tarballerina:~# time xm restore /var/saverestore-t.mem
> real 0m38.654s
>
> Why doesn't xc_restore slow down on tarballerina, no matter what I do?
> Again: all 3 machines have 24 GB RAM and 2x quad-core Xeons, and dom0 is
> fixed to 4096M RAM.
> All use the same Xen 4 sources and the same kernels with the same configs.
>
> Is the Xeon E5520 with DDR3 really this much faster than the L5335 and
> L5410 with DDR2?
>
> If someone were to tell me that this is expected behaviour, I wouldn't
> like it, but at least I could accept it.
> Are machines doing plenty of CPU and memory utilization not a good
> measurement in this or any case?
>
> I think tomorrow night I will migrate all machines from xenturio1 to
> tarballerina, but first I have to verify that all VLANs are available,
> which I cannot do right now.
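>
> (The migration itself would just be a round of live migrations along these
> lines; the domain names are placeholders, and it assumes xend's relocation
> server is enabled on tarballerina in xend-config.sxp:)
>
> for dom in vm1 vm2 vm3; do
>     xm migrate --live $dom tarballerina
> done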
>
> ---
>
> Andreas
>
Thread overview: 32+ messages
2010-06-01 17:49 XCP AkshayKumar Mehta
2010-06-01 19:06 ` XCP Jonathan Ludlam
2010-06-01 19:15 ` XCP AkshayKumar Mehta
2010-06-03 3:03 ` XCP AkshayKumar Mehta
2010-06-03 10:24 ` XCP Jonathan Ludlam
2010-06-03 17:20 ` XCP AkshayKumar Mehta
2010-08-31 1:33 ` XCP - iisues with XCP .5 AkshayKumar Mehta
2010-06-01 21:17 ` slow live magration / xc_restore on xen4 pvops Andreas Olsowski
2010-06-02 7:11 ` Keir Fraser
2010-06-02 15:46 ` Andreas Olsowski
2010-06-02 15:55 ` Keir Fraser
2010-06-02 16:18 ` Ian Jackson
2010-06-02 16:20 ` Ian Jackson
2010-06-02 16:24 ` Keir Fraser
2010-06-03 1:04 ` Brendan Cully
2010-06-03 4:31 ` Brendan Cully
2010-06-03 5:47 ` Keir Fraser
2010-06-03 6:45 ` Brendan Cully
2010-06-03 6:53 ` Jeremy Fitzhardinge
2010-06-03 6:55 ` Brendan Cully
2010-06-03 7:12 ` Keir Fraser
2010-06-03 8:58 ` Zhai, Edwin
2010-06-09 13:32 ` Keir Fraser
2010-06-02 16:27 ` Brendan Cully
2010-06-03 10:01 ` Ian Jackson
2010-06-03 15:03 ` Brendan Cully
2010-06-03 15:18 ` Keir Fraser
2010-06-03 17:15 ` Ian Jackson
2010-06-03 17:29 ` Brendan Cully
2010-06-03 18:02 ` Ian Jackson
2010-06-02 22:59 ` Andreas Olsowski
2010-06-10 9:27 ` Keir Fraser [this message]