From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jan Kiszka
Subject: Re: High vm-exit latencies during kvm boot-up/shutdown
Date: Fri, 26 Oct 2007 11:03:05 +0200
Message-ID: <4721AD49.7010405@siemens.com>
References: <471D2D8C.1080202@web.de> <471D96DC.7070809@web.de> <10EA09EFD8728347A513008B6B0DA77A024414D9@pdsmsx411.ccr.corp.intel.com> <471DBA1A.2080108@web.de> <471DC311.2050003@qumranet.com> <471DF76B.7040001@siemens.com> <471E02F7.6080408@qumranet.com> <471E0818.6060405@siemens.com> <471E1290.2000208@qumranet.com> <471E1A77.90808@siemens.com> <471E1DCD.5040301@qumranet.com> <471E21D7.3000309@siemens.com> <471E238E.6040005@qumranet.com> <471E27B0.1090001@siemens.com> <471E29C5.1060807@qumranet.com> <471F0464.4000704@siemens.com> <471F07C0.8040104@qumranet.com> <471F0D57.3020209@siemens.com> <471F116D.9080903@qumranet.com> <471F7143.8040406@siemens.com> <471F7865.7070506@qumranet.com> <4720D76D.7070300@siemens.com> <4720DA42.6090300@qumranet.com> <10EA09EFD8728347A513008B6B0DA77A02482827@pdsmsx411.ccr.corp.intel.com> <4721A350.7030409@qumranet.com> <10EA09EFD8728347A513008B6B0DA77A02482B96@pdsmsx411.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org, Avi Kivity
To: "Dong, Eddie"
In-Reply-To: <10EA09EFD8728347A513008B6B0DA77A02482B96-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
List-Id: kvm.vger.kernel.org

Dong, Eddie wrote:
> Avi Kivity wrote:
>> Dong, Eddie wrote:
>>>> There's a two-liner required to make it work. I'll add it soon.
>>>>
>>>>
>>> But you still need to issue WBINVD to all pCPUs, which just moves
>>> the non-responsive time from one place to another, no?
>>>
>> You don't actually need to emulate wbinvd, you can just ignore it.
>>
>> The only reason I can think of to use wbinvd is if you're taking a cpu
>> down (for a deep sleep state, or if you're shutting it off). A guest
>> need not do that.
>>
>> Any other reason? dma? all dma today is cache-coherent, no?
>>
> For legacy PCI devices, yes, it is cache-coherent, but for PCIe devices
> it is no longer a must. A PCIe device may not generate snoop cycles
> and thus requires the OS to flush the cache.
>
> For example, a guest with a directly assigned device, say audio, can
> copy data to the DMA buffer, then issue wbinvd to flush the cache out,
> and ask the HW DMA to output.

So if you want the higher performance of PCIe, you need the performance-killing wbinvd (not to speak of its latency)? That sounds a bit contradictory to me. Is this also true for native PCIe usage, then?

What really frightens me about wbinvd is that its latency "nicely" scales with the cache size. And I think my observed latency is far from being the worst case: in a different experiment, I once measured wbinvd latencies of a few milliseconds... :(

Jan

--
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
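For Eddie's example above, flushing only the DMA buffer itself would avoid the cache-size-scaled cost of a full wbinvd: on x86, clflush writes back and invalidates one cache line at a time, bounded by the buffer length rather than the total cache size. A minimal sketch of such a targeted flush, assuming a 64-byte cache line (the function and buffer names are illustrative, not from any driver in the thread):

```c
#include <stdint.h>
#include <stddef.h>

/* Assumed cache-line size for illustration; real code would read it
 * from CPUID leaf 1 (EBX bits 15:8) instead of hard-coding it. */
#define CACHE_LINE 64

/* Write back and invalidate a single cache line (SSE2's CLFLUSH). */
static inline void clflush_line(const void *p)
{
    __asm__ volatile("clflush (%0)" :: "r"(p) : "memory");
}

/* Flush just the DMA buffer, line by line, instead of issuing WBINVD.
 * The mfence pair orders CLFLUSH against surrounding loads/stores, as
 * CLFLUSH is only weakly ordered on its own. */
static void flush_dma_buffer(const void *buf, size_t len)
{
    uintptr_t p   = (uintptr_t)buf & ~(uintptr_t)(CACHE_LINE - 1);
    uintptr_t end = (uintptr_t)buf + len;

    __asm__ volatile("mfence" ::: "memory");
    for (; p < end; p += CACHE_LINE)
        clflush_line((const void *)p);
    __asm__ volatile("mfence" ::: "memory");
}
```

The cost of this loop grows with the buffer size, not with the cache size, so a guest flushing an audio DMA buffer this way would not hit the millisecond-range stalls measured above; it does, of course, assume the guest knows which range to flush.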