From mboxrd@z Thu Jan 1 00:00:00 1970
From: Avi Kivity
Subject: Re: High vm-exit latencies during kvm boot-up/shutdown
Date: Fri, 26 Oct 2007 11:58:51 +0200
Message-ID: <4721BA5B.2060002@qumranet.com>
References: <471D2D8C.1080202@web.de> <471DBA1A.2080108@web.de>
 <471DC311.2050003@qumranet.com> <471DF76B.7040001@siemens.com>
 <471E02F7.6080408@qumranet.com> <471E0818.6060405@siemens.com>
 <471E1290.2000208@qumranet.com> <471E1A77.90808@siemens.com>
 <471E1DCD.5040301@qumranet.com> <471E21D7.3000309@siemens.com>
 <471E238E.6040005@qumranet.com> <471E27B0.1090001@siemens.com>
 <471E29C5.1060807@qumranet.com> <471F0464.4000704@siemens.com>
 <471F07C0.8040104@qumranet.com> <471F0D57.3020209@siemens.com>
 <471F116D.9080903@qumranet.com> <471F7143.8040406@siemens.com>
 <471F7865.7070506@qumranet.com> <4720D76D.7070300@siemens.com>
 <4720DA42.6090300@qumranet.com>
 <10EA09EFD8728347A513008B6B0DA77A02482827@pdsmsx411.ccr.corp.intel.com>
 <4721A350.7030409@qumranet.com>
 <10EA09EFD8728347A513008B6B0DA77A02482B96@pdsmsx411.ccr.corp.intel.com>
 <4721B0FD.9020006@qumranet.com>
 <10EA09EFD8728347A513008B6B0DA77A02482BFE@pdsmsx411.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org, Jan Kiszka
To: "Dong, Eddie"
In-Reply-To: <10EA09EFD8728347A513008B6B0DA77A02482BFE-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
Sender: kvm-devel-bounces-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
Errors-To: kvm-devel-bounces-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
List-Id: kvm.vger.kernel.org

Dong, Eddie wrote:
>>
>> Okay. In that case the host can emulate wbinvd by using the clflush
>> instruction, which is much faster (although overall execution time may
>> be higher), maintaining real-time response times.
>>
>
> Faster? maybe.
> The issue is that clflush takes a va parameter, so KVM needs to map the
> gpa first and then do the flush.
>
> With this additional overhead, I am not sure which one is faster. But
> yes, this is the trend we may walk toward to reduce denial of service
> (flushing the host's or another VM's cache slows down the whole system).
>

The issue is not the total time to execute (wbinvd is probably faster),
but the time during which interrupts are blocked. wbinvd can block
interrupts for milliseconds, and if your industrial control machine needs
service every 100 microseconds, something breaks.

[Background: this is for running Linux with realtime extensions as the
host, with realtime processes on the host doing the control tasks and a
guest doing the GUI and perhaps communications.]

-- 
error compiling committee.c: too many arguments to function