public inbox for kvm@vger.kernel.org
From: Sheng Yang <sheng@linux.intel.com>
To: Avi Kivity <avi@redhat.com>
Cc: Jan Kiszka <jan.kiszka@siemens.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Joerg Roedel <joerg.roedel@amd.com>,
	kvm@vger.kernel.org, "Yaozu (Eddie) Dong" <eddie.dong@intel.com>
Subject: Re: [PATCH v3] KVM: VMX: Execute WBINVD to keep data consistency with assigned devices
Date: Mon, 28 Jun 2010 15:41:25 +0800	[thread overview]
Message-ID: <201006281541.25302.sheng@linux.intel.com> (raw)
In-Reply-To: <4C284A88.9000303@redhat.com>

On Monday 28 June 2010 15:08:56 Avi Kivity wrote:
> On 06/28/2010 09:56 AM, Sheng Yang wrote:
> > On Monday 28 June 2010 14:56:38 Avi Kivity wrote:
> >> On 06/28/2010 09:42 AM, Sheng Yang wrote:
> >>>>> +static void wbinvd_ipi(void *garbage)
> >>>>> +{
> >>>>> +	wbinvd();
> >>>>> +}
> >>>> 
> >>>> Like Jan mentioned, this is quite heavy.  What about a clflush() loop
> >>>> instead?  That may take more time, but at least it's preemptible.  Of
> >>>> course, it isn't preemptible in an IPI.
> >>> 
> >>> I think this kind of behavior happened rarely, and most recent
> >>> processor should have WBINVD exit which means it's an IPI... So I
> >>> think it's maybe acceptable here.
> >> 
> >> Several milliseconds of non-responsiveness may not be acceptable for
> >> some applications.  So I think queue_work_on() and a clflush loop is
> >> better than an IPI and wbinvd.
> > 
> > OK... Would update it in the next version.
> 
> Hm, the manual says (regarding clflush):
> > Invalidates the cache line that contains the linear address specified
> > with the source operand from all levels of the processor cache
> > hierarchy (data and instruction). The invalidation is broadcast
> > throughout the cache coherence domain. If, at any level of the cache
> > hierarchy, the line is inconsistent with memory (dirty) it is written
> > to memory before invalidation.
> 
> So I don't think you need queue_work_on(); instead you can work in
> vcpu thread context.  But better check with someone who really knows.

Yeah, I've just checked the instruction as well. Since the invalidation is
broadcast, it seems we don't even need (and can't have) a dirty bitmap. But
that also means the overhead on a large machine could be big.

And I've estimated how many times we would need to execute clflush to cover
the whole of guest memory. If I've calculated it right: clflush only covers
one 64-byte cache line at a time, so for a typical 4G guest we would need to
execute the instruction 4G / 64 = 64M times. The cycles taken by clflush can
vary; even supposing it takes only 10 cycles each (which sounds impossibly
low, given the broadcast and writeback involved, and not counting the cache
refill time on all the other processors), a single pass would cost more than
0.2 seconds on a 3.2GHz machine...

--
regards
Yang, Sheng



Thread overview: 32+ messages
2010-06-28  3:36 [PATCH v3] KVM: VMX: Execute WBINVD to keep data consistency with assigned devices Sheng Yang
2010-06-28  3:56 ` Avi Kivity
2010-06-28  6:42   ` Sheng Yang
2010-06-28  6:56     ` Avi Kivity
2010-06-28  6:56       ` Sheng Yang
2010-06-28  7:08         ` Avi Kivity
2010-06-28  7:41           ` Sheng Yang [this message]
2010-06-28  8:07             ` Avi Kivity
2010-06-28  8:42               ` [PATCH v4] " Sheng Yang
2010-06-28  9:27                 ` Avi Kivity
2010-06-28  9:31                   ` Gleb Natapov
2010-06-28  9:35                     ` Avi Kivity
2010-06-29  3:16                       ` [PATCH v5] " Sheng Yang
2010-06-29  9:39                         ` Avi Kivity
2010-06-29 10:32                           ` Jan Kiszka
2010-06-29 10:42                             ` Avi Kivity
2010-06-29 12:32                               ` Roedel, Joerg
2010-06-29 12:37                                 ` Avi Kivity
2010-06-29 10:14                         ` Roedel, Joerg
2010-06-29 10:44                           ` Avi Kivity
2010-06-29 12:28                             ` Roedel, Joerg
2010-06-29 12:35                               ` Avi Kivity
2010-06-29 13:34                                 ` Roedel, Joerg
2010-06-29 13:25                         ` Marcelo Tosatti
2010-06-29 13:28                           ` Avi Kivity
2010-06-29 13:35                             ` Marcelo Tosatti
2010-06-29 13:50                               ` Avi Kivity
2010-06-29 14:31                                 ` Marcelo Tosatti
2010-06-28  7:30       ` [PATCH v3] " Dong, Eddie
2010-06-28  8:04         ` Avi Kivity
2010-06-28  8:16           ` Dong, Eddie
2010-06-28  8:45             ` Jan Kiszka
