Message-ID: <4FBABF2D.2020200@codemonkey.ws>
Date: Mon, 21 May 2012 17:18:21 -0500
From: Anthony Liguori
In-Reply-To: <20120521083132.GI4674@redhat.com>
Subject: Re: [Qemu-devel] [PATCH] Add a memory barrier to guest memory access functions
To: "Michael S. Tsirkin"
Cc: Paolo Bonzini, David Gibson, qemu-devel@nongnu.org

On 05/21/2012 03:31 AM, Michael S. Tsirkin wrote:
> More than that. smp_mb is pretty expensive. You can often do just
> smp_wmb and smp_rmb, and those are very cheap.
> Many operations run in vcpu context, or start when the guest exits
> to the host and the work is bounced from there, so no barrier is
> needed there.
>
> Example? start_xmit in e1000. Executed in vcpu context,
> so no barrier is needed.
>
> virtio of course is another example since it does its own
> barriers. But even without that, virtio_blk_handle_output
> runs in vcpu context.
>
> But more importantly, this hack just sweeps the
> dirt under the carpet. Understanding the interaction
> with guest drivers is important anyway. So

But this isn't what this series is about. This series is only attempting
to make sure that writes are ordered with respect to other writes in main
memory. It's based on the assumption that write ordering is well defined
(and typically strict) on most busses, including PCI. I have not confirmed
this myself, but I trust that Ben has.

So the only problem being solved here is to make sure that if a write A is
issued by the device model while it's on PCPU 0, and PCPU 1 does a write B
to another location, and then the device model runs on PCPU 2 and reads
both A and B, it will only see the new value of B if it also sees the new
value of A.

Whether the driver on VCPU 0 (which may be on any PCPU) also sees the
write ordering is irrelevant.

If you want to avoid taking a barrier on every write, we can make use of
map() and issue explicit barriers (as virtio does).

Regards,

Anthony Liguori

> I really don't see why we don't audit devices
> and add proper barriers.
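
The A/B scenario Anthony describes is the classic store-ordering pattern
that a paired smp_wmb()/smp_rmb() resolves. The following is a minimal,
self-contained sketch of that pattern, not QEMU code and not part of the
original thread; the writer/reader functions, the variable names, and the
mapping of the barriers onto C11 fences are illustrative assumptions.

    /*
     * Sketch of the ordering problem discussed above: one CPU stores the
     * payload (write A) and then a flag (write B); a reader on another CPU
     * must not observe the new flag without also observing the new payload.
     * smp_wmb()/smp_rmb() here are stand-ins mapped onto C11 fences.
     */
    #include <stdatomic.h>
    #include <stdbool.h>

    #define smp_wmb()  atomic_thread_fence(memory_order_release)
    #define smp_rmb()  atomic_thread_fence(memory_order_acquire)

    static int payload;            /* write A: the data                  */
    static _Atomic bool flag;      /* write B: "payload is ready"        */

    /* Runs on one physical CPU (e.g. the device model on PCPU 0/1). */
    static void writer(void)
    {
        payload = 42;                                            /* write A */
        smp_wmb();                /* order A before B for other CPUs        */
        atomic_store_explicit(&flag, true, memory_order_relaxed); /* write B */
    }

    /* Runs later on a different physical CPU (e.g. PCPU 2). */
    static int reader(void)
    {
        if (atomic_load_explicit(&flag, memory_order_relaxed)) { /* saw new B */
            smp_rmb();            /* order the read of B before the read of A */
            return payload;       /* guaranteed to see the new A              */
        }
        return -1;                /* flag not yet visible                     */
    }

A full smp_mb() in both places would also work, but as Michael notes it is
considerably more expensive than the paired write/read barriers.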