From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dor Laor
Subject: Re: High vm-exit latencies during kvm boot-up/shutdown
Date: Fri, 26 Oct 2007 11:37:15 +0200
Message-ID: <4721B54B.5080307@qumranet.com>
References: <471D2D8C.1080202@web.de> <471DBA1A.2080108@web.de>
 <471DC311.2050003@qumranet.com> <471DF76B.7040001@siemens.com>
 <471E02F7.6080408@qumranet.com> <471E0818.6060405@siemens.com>
 <471E1290.2000208@qumranet.com> <471E1A77.90808@siemens.com>
 <471E1DCD.5040301@qumranet.com> <471E21D7.3000309@siemens.com>
 <471E238E.6040005@qumranet.com> <471E27B0.1090001@siemens.com>
 <471E29C5.1060807@qumranet.com> <471F0464.4000704@siemens.com>
 <471F07C0.8040104@qumranet.com> <471F0D57.3020209@siemens.com>
 <471F116D.9080903@qumranet.com> <471F7143.8040406@siemens.com>
 <471F7865.7070506@qumranet.com> <4720D76D.7070300@siemens.com>
 <4720DA42.6090300@qumranet.com>
 <10EA09EFD8728347A513008B6B0DA77A02482827@pdsmsx411.ccr.corp.intel.com>
 <4721A350.7030409@qumranet.com>
 <10EA09EFD8728347A513008B6B0DA77A02482B96@pdsmsx411.ccr.corp.intel.com>
 <4721AD49.7010405@siemens.com>
 <10EA09EFD8728347A513008B6B0DA77A02482BE4@pdsmsx411.ccr.corp.intel.com>
Reply-To: dor.laor-atKUWr5tajBWk0Htik3J/w@public.gmane.org
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org,
 Jan Kiszka, Avi Kivity
To: "Dong, Eddie"
In-Reply-To: <10EA09EFD8728347A513008B6B0DA77A02482BE4-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
Sender: kvm-devel-bounces-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
Errors-To: kvm-devel-bounces-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
List-Id: kvm.vger.kernel.org

Dong, Eddie wrote:
> Jan Kiszka wrote:
> > So if you want the higher performance of PCIe, you need the
> > performance-killing wbinvd (not to speak of latency)? That sounds a
> > bit contradictory to me. So this is also true for native PCIe usage?
>
> Mmm, I won't say so. When you want RT performance, you won't use an
> unknown OS such as Windows. If you use your RT Linux, the guest OS
> itself should not use wbinvd.
>
The idea is to have an RT Linux on the host that runs the RT jobs but
also to use it to run mgmt/other code in Windows guests. This way you
can better utilize an otherwise idle RT Linux machine while keeping
short latencies for the RT jobs.

Dor.

> > For the Bochs BIOS case, like Avi mentioned, we can simply remove it.
> >
> > What really frightens me about wbinvd is that its latency "nicely"
> > scales with the cache sizes. And I think my observed latency is far
> > from being the worst case. In a different experiment, I once measured
> > wbinvd latencies of a few milliseconds... :(
> >
> I don't know the details, but it could be. RT usage, though, can
> easily remove this instruction.
>
> thx, eddie
>
> -------------------------------------------------------------------------
> This SF.net email is sponsored by: Splunk Inc.
> Still grepping through log files to find problems? Stop.
> Now Search log events and configuration files using AJAX and a browser.
> Download your FREE copy of Splunk now >> http://get.splunk.com/
> _______________________________________________
> kvm-devel mailing list
> kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
> https://lists.sourceforge.net/lists/listinfo/kvm-devel