From: Dario Faggioli
Subject: Re: Notes on stubdoms and latency on ARM
Date: Tue, 20 Jun 2017 12:11:58 +0200
Message-ID: <1497953518.7405.21.camel@citrix.com>
References: <8c63069d-c909-e82c-ecba-5451f822a5cc@citrix.com>
To: Volodymyr Babchuk, Stefano Stabellini
Cc: Artem_Mygaiev@epam.com, Julien Grall, xen-devel@lists.xensource.com, Andrii Anisov, George Dunlap
List-Id: xen-devel@lists.xenproject.org

On Mon, 2017-06-19 at 11:36 -0700, Volodymyr Babchuk wrote:
> On 19 June 2017 at 10:54, Stefano Stabellini wrote:
> > True. However, Volodymyr took the time to demonstrate the
> > performance of EL0 apps vs. stubdoms with a PoC, which is much more
> > than most Xen contributors do. Nobody has provided numbers for a
> > faster ARM context switch yet. I don't know on whom the burden of
> > proving that a lighter context switch can match the EL0 app numbers
> > should fall. I am not sure it would be fair to ask Volodymyr to do
> > it.
>
> Thanks. Actually, we discussed this topic internally today. Our main
> concern today is not SMCs and OP-TEE (I will be happy to do that
> right in Xen), but vcoprocs and GPU virtualization. Because of legal
> issues, we can't put this in Xen. And because of the nature of the
> vcoproc framework, we will need multiple calls to the vGPU driver
> per vCPU context switch.
> I'm going to create a worst-case scenario, where multiple vCPUs are
> active and there are no free pCPUs, to see how the credit or credit2
> scheduler will call my stubdom.

Well, that would be interesting and useful; thanks for offering to do
that.

Let's just keep in mind, though, that if the numbers turn out to be
bad (and we manage to trace that back to scheduling), then:
1) we can create a mechanism that bypasses the scheduler;
2) we can change the way stubdoms are scheduled.

Option 2) is something generic, would (most likely) benefit other use
cases too, and we've said many times we'd be up for it... so let's
please just not rule it out... :-)
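Just to make that worst-case experiment concrete, here is a minimal
sketch of how it could be driven from dom0 (Python, shelling out to
the standard xl and xentrace tools). The domain names, the pCPU count
and the 30-second capture window are illustrative assumptions, not
anything from this thread; 0x0002f000 is the TRC_SCHED event class
from Xen's public trace headers.

#!/usr/bin/env python3
# Minimal sketch: create scheduler pressure (more runnable vCPUs than
# pCPUs) and record scheduler trace events for later analysis.
# Assumptions: guests named guest1..guest4 already exist and are
# running a CPU-bound load; xl and xentrace are available in dom0.
import subprocess

GUESTS = ["guest%d" % i for i in range(1, 5)]  # hypothetical names
PCPUS = 2                                      # pin everything to 2 pCPUs

def sh(*args):
    # Echo and run a dom0 command, failing loudly on error.
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# Pin every vCPU of every guest onto the same small set of pCPUs, so
# that runnable vCPUs always outnumber free pCPUs (the worst case).
for dom in GUESTS:
    sh("xl", "vcpu-pin", dom, "all", "0-%d" % (PCPUS - 1))

# Capture scheduler events (TRC_SCHED class) for 30 seconds while the
# guests are busy. timeout kills xentrace when the window ends, so a
# non-zero exit status here is expected and not checked.
subprocess.run(["timeout", "30", "xentrace", "-e", "0x0002f000",
                "trace.raw"])

Feeding trace.raw to xenalyze afterwards should show how often, and
after how long a wait, the stubdom's vCPU actually gets scheduled
under credit or credit2.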
Regards,
Dario

-- 
<> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)