From: Marcelo Tosatti
Subject: Re: [patch 2/4] KVM: move coalesced_mmio locking to its own device
Date: Wed, 20 May 2009 12:22:00 -0300
Message-ID: <20090520152200.GA14729@amt.cnet>
References: <20090518165601.747763120@localhost.localdomain>
 <20090518170855.346048603@localhost.localdomain>
 <4A13F242.7090404@redhat.com>
 <20090520140956.GA3370@amt.cnet>
 <4A1413C3.4020606@redhat.com>
 <20090520151303.GA14582@amt.cnet>
In-Reply-To: <20090520151303.GA14582@amt.cnet>
To: Avi Kivity
Cc: kvm@vger.kernel.org

On Wed, May 20, 2009 at 12:13:03PM -0300, Marcelo Tosatti wrote:
> On Wed, May 20, 2009 at 05:29:23PM +0300, Avi Kivity wrote:
> > Marcelo Tosatti wrote:
> > >
> > >
> > >>> So we have a function that takes a lock and conditionally releases it?
> > >>>
> > >>
> > >> Yes, but it is correct: it will only return with the lock held in case
> > >> it returns 1, in which case it is guaranteed ->write will be called
> > >> (which will unlock it).
> > >>
> > >> It should check the range first and/or use some smarter synchronization,
> > >> but one thing at a time.
> > >>
> > >
> > > Yes, it's correct, but we'll get an endless stream of patches to 'fix'
> > > it because it is so unorthodox.
>
> Does it have to guarantee any kind of ordering in case of parallel
> writes by distinct vcpus? This is what it does now (so if a vcpu
> arrives first, the second will wait until the first is finished
> processing).
>
> I suppose that is the responsibility of the guest (if it does MMIO
> writes to a device in parallel it'll screw up on real HW too).
>
> Because in that case, you can drop the mutex and protect only the kvm
> data.

If you want it to provide ordering (that is, process the MMIO writes by
multiple vcpus in the order they happen), you need a mutex or spinlock.
And in that case, I don't see any other way around this given the way
->in_range / ->read / ->write work.

Even if you change the order in which the full-mmio-buffer check and the
coalesced-allowed-in-this-range check happen, you still need a
spinlock/mutex held in this unorthodox way.
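
For readers following the thread, here is a minimal, self-contained
userspace sketch of the lock handoff being discussed: ->in_range takes
the device lock and only returns with it held when it returns 1, in
which case ->write is guaranteed to run next and drop the lock. All
names below (struct mmio_dev, dev_in_range, dev_write, RING_MAX, the
zone fields) are illustrative stand-ins, not the actual kvm_io_device
callbacks or the real coalesced_mmio code.

/*
 * Sketch of the "lock in in_range, unlock in write" pattern, using a
 * pthread mutex in place of the in-kernel lock. Illustrative names only.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RING_MAX 64

struct mmio_entry {
	uint64_t addr;
	uint32_t len;
	uint8_t  data[8];
};

struct mmio_dev {
	pthread_mutex_t lock;
	struct mmio_entry ring[RING_MAX];
	unsigned first, last;          /* consumer / producer indices */
	uint64_t zone_start, zone_len; /* registered coalesced MMIO zone */
};

/*
 * Returns 1 with dev->lock HELD if the access will be coalesced; the
 * caller is then guaranteed to call dev_write(), which drops the lock.
 * Returns 0 with the lock already released otherwise.
 */
static int dev_in_range(struct mmio_dev *dev, uint64_t addr, uint32_t len)
{
	pthread_mutex_lock(&dev->lock);

	/* ring full: cannot coalesce, back out */
	if ((dev->last + 1) % RING_MAX == dev->first) {
		pthread_mutex_unlock(&dev->lock);
		return 0;
	}
	/* outside the registered zone: not ours */
	if (addr < dev->zone_start ||
	    addr + len > dev->zone_start + dev->zone_len) {
		pthread_mutex_unlock(&dev->lock);
		return 0;
	}
	return 1;	/* lock intentionally left held */
}

/* Only called after dev_in_range() returned 1; releases the lock. */
static void dev_write(struct mmio_dev *dev, uint64_t addr,
		      const void *val, uint32_t len)
{
	struct mmio_entry *e = &dev->ring[dev->last];

	e->addr = addr;
	e->len  = len;
	memcpy(e->data, val, len < sizeof(e->data) ? len : sizeof(e->data));
	dev->last = (dev->last + 1) % RING_MAX;

	pthread_mutex_unlock(&dev->lock);
}

int main(void)
{
	struct mmio_dev dev = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.zone_start = 0x1000, .zone_len = 0x1000,
	};
	uint32_t val = 0xdeadbeef;

	if (dev_in_range(&dev, 0x1004, sizeof(val)))	/* takes the lock...   */
		dev_write(&dev, 0x1004, &val, sizeof(val)); /* ...write drops it */

	printf("entries queued: %u\n", dev.last - dev.first);
	return 0;
}

Because the buffer-full check and the append happen under the same lock,
writes from different vcpus are queued in the order the lock is acquired,
which is the ordering property in question; dropping the mutex between
the two calls would reopen exactly the race being debated above.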