From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jan Kiszka
Subject: Re: [PATCH] KVM: VMX: Execute WBINVD to keep data consistency with assigned devices
Date: Fri, 25 Jun 2010 10:54:19 +0200
Message-ID: <4C246EBB.1020909@siemens.com>
References: <1277452623-24046-1-git-send-email-sheng@linux.intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-15
Content-Transfer-Encoding: 7bit
Cc: Avi Kivity , Marcelo Tosatti , kvm@vger.kernel.org,
 "Yaozu (Eddie) Dong"
To: Sheng Yang
Return-path:
Received: from goliath.siemens.de ([192.35.17.28]:18384 "EHLO goliath.siemens.de"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751787Ab0FYIyg
 (ORCPT ); Fri, 25 Jun 2010 04:54:36 -0400
In-Reply-To: <1277452623-24046-1-git-send-email-sheng@linux.intel.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

Sheng Yang wrote:
> Some guest device driver may leverage the "Non-Snoop" I/O, and explicitly
> WBINVD or CLFLUSH to a RAM space. Since migration may occur before WBINVD or
> CLFLUSH, we need to maintain data consistency either by:
> 1: flushing cache (wbinvd) when the guest is scheduled out if there is no
>    wbinvd exit, or
> 2: execute wbinvd on all dirty physical CPUs when guest wbinvd exits.
>
> For wbinvd VMExit capable processors, we issue IPIs to all physical CPUs to
> do wbinvd, for we can't easily tell which physical CPUs are "dirty".

wbinvd is a heavy weapon in the hands of a guest. Even if it is limited to
pass-through scenarios, do we really need to bother all physical host CPUs
with potential multi-millisecond stalls? Think of VMs that only run on a
subset of CPUs (e.g. to isolate latency sources).

I would suggest tracking the physical CPU usage of the VCPUs between two
wbinvd requests and only sending the wbinvd IPI to that set.

Also, I think the code is still too vmx-focused. Only the trapping should be
vendor-specific, the rest generic.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux