From: Andrew Cooper
Subject: Re: [PATCH v15 01/11] multicall: add no preemption ability between two calls
Date: Wed, 10 Sep 2014 12:12:07 +0100
Message-ID: <54103207.5080004@citrix.com>
In-Reply-To: <541043330200007800033304@mail.emea.novell.com>
References: <1409906249-6057-1-git-send-email-chao.p.peng@linux.intel.com> <1409906249-6057-2-git-send-email-chao.p.peng@linux.intel.com> <5409947C.3020402@citrix.com> <20140909064359.GF15872@pengc-linux> <540EF4EC0200007800032903@mail.emea.novell.com> <540EDB9B.6080908@citrix.com> <540F05ED020000780003299E@mail.emea.novell.com> <540EF624.4020200@citrix.com> <540F19AF0200007800032AD1@mail.emea.novell.com> <20140910013232.GG15872@pengc-linux> <54101D29.8010606@citrix.com> <54103F01020000780003328F@mail.emea.novell.com> <541024AF.8050404@citrix.com> <541043330200007800033304@mail.emea.novell.com>
To: Jan Beulich
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com, George.Dunlap@eu.citrix.com, Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org, Chao Peng, dgdegra@tycho.nsa.gov
List-Id: xen-devel@lists.xenproject.org

On 10/09/14 11:25, Jan Beulich wrote:
>>>> On 10.09.14 at 12:15, wrote:
>> On 10/09/14 11:07, Jan Beulich wrote:
>>>>>> On 10.09.14 at 11:43, wrote:
>>>> Actually, on further thought, using multicalls like this cannot
>>>> possibly be correct from a functional point of view.
>>>>
>>>> Even with the no-preempt flag between a wrmsr/rdmsr hypercall pair,
>>>> there is no guarantee that accesses to remote CPUs' MSRs won't
>>>> interleave with a different natural access, clobbering the result of
>>>> the wrmsr.
>>>>
>>>> However this is solved, the wrmsr/rdmsr pair *must* be part of the
>>>> same synchronous thread of execution on the appropriate CPU.  You can
>>>> trust that interrupts won't play with these MSRs, but you absolutely
>>>> can't guarantee that IPI/wrmsr/IPI/rdmsr will work.
>>> Not sure I follow, particularly in the context of the whitelisting of
>>> MSRs permitted here (which ought not to include anything the
>>> hypervisor needs control over).
>> Consider two dom0 vcpus both using this new multicall mechanism to read
>> QoS information for different domains, which end up both targeting the
>> same remote CPU.  They will both end up using IPI/wrmsr/IPI/rdmsr,
>> which may interleave and clobber the first wrmsr.
> But that situation doesn't result from the multicall use here - it would
> equally be the case for an inherently batchable hypercall.

Indeed - I called out multicall because of the current implementation,
but I should have been clearer.

> To deal with that we'd need a wrmsr-then-rdmsr operation, or move the
> entire execution of the batch onto the target CPU.  Since the former
> would quickly become unwieldy for more complex operations, I think this
> gets us back to aiming at using continue_hypercall_on_cpu() here.

Which gets us back to the problem that you cannot use
copy_{to,from}_guest() after continue_hypercall_on_cpu(), due to being
in the wrong context.

I think this requires a step back and a rethink.  I can't offhand think
of any combination of existing bits of infrastructure which will allow
this to work correctly, which means something new needs designing.

~Andrew
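
For reference, the "wrmsr-then-rdmsr operation" Jan suggests above
amounts to executing the write and the dependent read as one synchronous
unit on the target CPU, so that another vcpu's IPI/wrmsr/IPI/rdmsr
sequence cannot slot in between them.  A minimal sketch of that idea,
built on Xen's existing on_selected_cpus(), wrmsrl() and rdmsrl()
primitives - the structure and function names below (msr_pair_op,
do_wrmsr_then_rdmsr, read_remote_qos_msr) are illustrative only and do
not come from any posted patch:

#include <xen/types.h>
#include <xen/cpumask.h>
#include <xen/smp.h>
#include <asm/msr.h>

/* One combined operation: write an event-select MSR, read a data MSR. */
struct msr_pair_op {
    uint32_t evtsel_msr;   /* MSR selecting what to monitor */
    uint64_t evtsel_val;
    uint32_t data_msr;     /* MSR holding the resulting count */
    uint64_t result;       /* filled in on the target CPU */
};

static void do_wrmsr_then_rdmsr(void *info)
{
    struct msr_pair_op *op = info;

    /*
     * Both accesses run within a single IPI handler invocation, so
     * another CPU's write to evtsel_msr cannot interleave between them.
     * Local interrupt handlers are trusted not to touch these MSRs.
     */
    wrmsrl(op->evtsel_msr, op->evtsel_val);
    rdmsrl(op->data_msr, op->result);
}

/* Run the pair on @cpu, waiting for it to complete before returning. */
static uint64_t read_remote_qos_msr(unsigned int cpu,
                                    struct msr_pair_op *op)
{
    on_selected_cpus(cpumask_of(cpu), do_wrmsr_then_rdmsr, op, 1);
    return op->result;
}

Note this only addresses the interleaving problem; it does nothing for
the copy_{to,from}_guest() restriction after continue_hypercall_on_cpu()
raised at the end of the mail, which would still need a separate answer
(for instance, copying all guest arguments in before rescheduling).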