Date: Thu, 12 Nov 2009 01:18:11 +0800
Subject: Re: [Qemu-devel] Re: [PATCH] qemu/virtio: make wmb compiler barrier + comments
From: Scott Tsai
To: "Michael S. Tsirkin"
Cc: Paul Brook, qemu-devel@nongnu.org
In-Reply-To: <20091111140819.GA29736@redhat.com>
References: <20091026131715.GA25271@redhat.com> <200911111301.03427.paul@codesourcery.com> <20091111131253.GC23036@redhat.com> <200911111345.35249.paul@codesourcery.com> <20091111140819.GA29736@redhat.com>

On Wed, Nov 11, 2009 at 10:08 PM, Michael S. Tsirkin wrote:
> On Wed, Nov 11, 2009 at 01:45:35PM +0000, Paul Brook wrote:
>> If you don't need real barriers, then why does the kvm code have them?
>
> We need real barriers but AFAIK kvm does not have them :(
> IOW: virtio is currently broken with kvm, and my patch did
> not fix this. The comment that I added says as much.

How about just using GCC's __sync_synchronize atomic builtin (if
detected as available by configure)?

It's a full memory barrier instead of just a write barrier, and on x86
it generates the same code as the current Linux mb() implementation:
"mfence" on x86_64, and "lock orl $0x0,(%esp)" on 32-bit x86 unless
-march selects a processor that has "mfence".

PPC could continue to use "eieio" while other architectures could just
default to __sync_synchronize.

I do have a newbie question: when exactly would virtio have to handle
concurrent access from multiple threads? My current reading of the code
suggests:
1. when CONFIG_IOTHREAD is true
2. when CONFIG_KVM is true and the guest machine has multiple CPUs
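
For illustration, here is a minimal sketch of how the barrier selection
described above might be wired up. This is not the actual QEMU code:
CONFIG_SYNC_SYNCHRONIZE (assumed to be defined by a configure probe when
the builtin is available) and the virtio_mb() name are hypothetical.

/*
 * Hypothetical barrier selection, roughly following the proposal above.
 * CONFIG_SYNC_SYNCHRONIZE and virtio_mb() are made-up names for this
 * sketch; QEMU's real configure tests and macros may differ.
 */
#if defined(__powerpc__)
/* keep the existing PPC barrier */
#define virtio_mb()  asm volatile("eieio" ::: "memory")
#elif defined(CONFIG_SYNC_SYNCHRONIZE)
/*
 * Full memory barrier via the GCC builtin; on x86_64 this emits
 * "mfence", on 32-bit x86 a "lock orl $0x0,(%esp)" unless -march
 * enables mfence.
 */
#define virtio_mb()  __sync_synchronize()
#else
/* fall back to a pure compiler barrier */
#define virtio_mb()  asm volatile("" ::: "memory")
#endif

The configure side could presumably just try to compile and link a
one-line program that calls __sync_synchronize() and define the macro
on success, in the same style as QEMU's other feature probes.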