From: Dario Faggioli
Subject: Re: Read Performance issue when Xen Hypervisor is activated
Date: Thu, 12 Jan 2017 18:03:49 +0100
To: Michael Schinzel, Xen-devel
Cc: Roger Pau Monne, Thomas Toka

On Mon, 2017-01-02 at 07:15 +0000, Michael Schinzel wrote:
> Good Morning,
>
I'm back, although, as anticipated, I can't be terribly useful, I'm
afraid...

> You can see, in the default Xen configuration, the most important
> thing in the read performance test -> 2414.92 MB/sec <- : the cache
> used is only half of what it is when the same host is booted without
> the hypervisor. We then searched and searched and searched, and found
> the cause: xen_acpi_processor
>
> Xen manages the CPU frequency, by default, at 1200 MHz. It is like
> driving a Ferrari all the time at 30 miles/h :) So we changed the
> performance parameter with:
>
>  xenpm set-scaling-governor all performance
>
Well, yes, this will have an impact, but it's unlikely to be what
you're looking for. In fact, something similar also applies to
baremetal Linux.

> After a little bit of searching around, I also found a parameter for
> the I/O scheduler:
>
> root@v7:~# cat /sys/block/sda/queue/scheduler
> noop deadline [cfq]
>
> I changed the scheduler to deadline. After this change
>
Well, ISTR that [noop] could be even better. But I don't think this
will make much difference either, in this case.

> We have already tried removing the CPU reservation, memory limit and
> so on, but this does not change anything. Upgrading the hypervisor
> does not change anything about this performance issue either.
>
Well, these are all sequential benchmarks, so it could indeed have been
expected that adding more vCPUs wouldn't change things much.

I decided to re-run some of your tests on my test hardware (which is
way lower end than yours, especially as far as storage is concerned).
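(Before the numbers, and just for reference, here is a minimal sketch
of how to inspect and change the two settings discussed above; the
device name /dev/sda and the choice of 'noop' are only examples,
adjust them to your setup.)

  # show the cpufreq governor and the frequencies Xen is currently using:
  xenpm get-cpufreq-para

  # switch all physical CPUs to the 'performance' governor
  # (the command from the quoted mail):
  xenpm set-scaling-governor all performance

  # check which I/O scheduler dom0 uses for a given disk, and change it:
  cat /sys/block/sda/queue/scheduler
  echo noop > /sys/block/sda/queue/scheduler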
These are my results:

hdparm -Tt /dev/sda

                              Without Xen (baremetal Linux)               With Xen (from within dom0)
Timing cached reads:          14074 MB in 2.00 seconds = 7043.05 MB/sec   14694 MB in 1.99 seconds = 7382.22 MB/sec
Timing buffered disk reads:     364 MB in 3.01 seconds =  120.78 MB/sec     364 MB in 3.00 seconds =  121.22 MB/sec

dd_obs_test.sh datei (transfer rate by block size)

block size (bytes)   Without Xen (baremetal Linux)   With Xen (from within dom0)
       512           279 MB/s                        123 MB/s
      1024           454 MB/s                        217 MB/s
      2048           275 MB/s                        359 MB/s
      4096           888 MB/s                        532 MB/s
      8192           987 MB/s                        659 MB/s
     16384           1.0 GB/s                        685 MB/s
     32768           1.1 GB/s                        773 MB/s
     65536           1.1 GB/s                        846 MB/s
    131072           1.1 GB/s                        749 MB/s
    262144           327 MB/s                        844 MB/s
    524288           1.1 GB/s                        783 MB/s
   1048576           420 MB/s                        823 MB/s
   2097152           485 MB/s                        305 MB/s
   4194304           409 MB/s                        783 MB/s
   8388608           380 MB/s                        776 MB/s
  16777216           950 MB/s                        703 MB/s
  33554432           916 MB/s                        297 MB/s
  67108864           856 MB/s                        492 MB/s

time dd if=/dev/zero of=datei bs=1M count=10240

          Without Xen (baremetal Linux)   With Xen (from within dom0)
          73.7224 s, 146 MB/s             97.6948 s, 110 MB/s
real      1m13.724s                       1m37.700s
user      0m0.000s                        0m0.068s
sys       0m9.364s                        0m15.180s

root@Zhaman:~# time dd if=datei of=/dev/null

          Without Xen (baremetal Linux)   With Xen (from within dom0)
          9.92787 s, 1.1 GB/s             95.1827 s, 113 MB/s
real      0m9.953s                        1m35.194s
user      0m2.096s                        0m10.632s
sys       0m7.300s                        0m51.820s

Which confirms that, when running the tests inside a Xen Dom0, things
are indeed slower. Let me say something, though: the purpose of Xen is
not to achieve the best possible performance in Dom0; it is to achieve
the best possible aggregate performance of a number of guest domains.
It is well known that virtualization has an overhead, and that Dom0
pays quite a high price for it. Have you tried, for instance, running
some of the tests in a DomU? (A minimal example guest config is
sketched at the end of this mail.)

Now, whether what both you and I are seeing is to be considered
"normal", I can't tell. Maybe Roger can (or he can tell us who to
bother for that).

In general, I don't think updating random system and firmware
components is useful at all... This is not a BIOS issue, IMO.
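In case it helps with the DomU comparison, here is a minimal, purely
illustrative sketch of such a test. The guest name, memory size and
backing volume below are made-up examples, and it assumes a PV guest
image (with a kernel that pygrub can find) is already installed on
that volume:

  # /etc/xen/disk-bench.cfg -- hypothetical minimal PV guest config
  name       = "disk-bench"
  memory     = 2048
  vcpus      = 2
  # point this at whatever backing storage you actually want to measure
  disk       = [ 'phy:/dev/vg0/disk-bench,xvda,w' ]
  bootloader = "pygrub"

  # create the guest and attach to its console:
  xl create -c /etc/xen/disk-bench.cfg

  # then, inside the guest, repeat the same tests, e.g.:
  hdparm -Tt /dev/xvda
  dd if=/dev/zero of=datei bs=1M count=10240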
Regards,
Dario

-- 
<> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)