From: Chao Peng
Subject: Re: [PATCH v15 01/11] multicall: add no preemption ability between two calls
Date: Wed, 17 Sep 2014 17:22:37 +0800
Message-ID: <20140917092237.GA15318@pengc-linux>
References: <540F05ED020000780003299E@mail.emea.novell.com>
 <540EF624.4020200@citrix.com>
 <540F19AF0200007800032AD1@mail.emea.novell.com>
 <20140910013232.GG15872@pengc-linux>
 <54101D29.8010606@citrix.com>
 <54103F01020000780003328F@mail.emea.novell.com>
 <541024AF.8050404@citrix.com>
 <541043330200007800033304@mail.emea.novell.com>
 <54103207.5080004@citrix.com>
 <20140912025543.GI15872@pengc-linux>
Reply-To: Chao Peng
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
In-Reply-To: <20140912025543.GI15872@pengc-linux>
To: Andrew Cooper , Jan Beulich
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
 George.Dunlap@eu.citrix.com, Ian.Jackson@eu.citrix.com,
 xen-devel@lists.xen.org, dgdegra@tycho.nsa.gov
List-Id: xen-devel@lists.xenproject.org

On Fri, Sep 12, 2014 at 10:55:43AM +0800, Chao Peng wrote:
> On Wed, Sep 10, 2014 at 12:12:07PM +0100, Andrew Cooper wrote:
> > On 10/09/14 11:25, Jan Beulich wrote:
> > >>>> On 10.09.14 at 12:15, wrote:
> > >> On 10/09/14 11:07, Jan Beulich wrote:
> > >>>>>> On 10.09.14 at 11:43, wrote:
> > >>>> Actually, on further thought, using multicalls like this cannot possibly
> > >>>> be correct from a functional point of view.
> > >>>>
> > >>>> Even with the no preempt flag between a wrmsr/rdmsr hypercall pair,
> > >>>> there is no guarantee that accesses to remote cpus msrs won't interleave
> > >>>> with a different natural access, clobbering the results of the wrmsr.
> > >>>>
> > >>>> However this is solved, the wrmsr/rdmsr pair *must* be part of the same
> > >>>> synchronous thread of execution on the appropriate cpu.  You can trust
> > >>>> that interrupts won't play with these msrs, but you absolutely can't
> > >>>> guarantee that IPI/wrmsr/IPI/rdmsr will work.
> > >>> Not sure I follow, particularly in the context of the white listing of
> > >>> MSRs permitted here (which ought to not include anything the
> > >>> hypervisor needs control over).
> > >> Consider two dom0 vcpus both using this new multicall mechanism to read
> > >> QoS information for different domains, which end up both targeting the
> > >> same remote cpu.  They will both end up using IPI/wrmsr/IPI/rdmsr, which
> > >> may interleave and clobber the first wrmsr.
> > > But that situation doesn't result from the multicall use here - it would
> > > equally be the case for an inherently batchable hypercall.
> >
> > Indeed - I called out multicall because of the current implementation,
> > but I should have been more clear.
> >
> > > To deal with
> > > that we'd need a wrmsr-then-rdmsr operation, or move the entire
> > > execution of the batch onto the target CPU. Since the former would
> > > quickly become unwieldy for more complex operations, I think this
> > > gets us back to aiming at using continue_hypercall_on_cpu() here.
> >
> > Which gets us back to the problem that you cannot use
> > copy_{to,from}_guest() after continue_hypercall_on_cpu(), due to being
> > in the wrong context.
> >
> > I think this requires a step back and rethink.
> > I can't offhand think of any combination of existing bits of
> > infrastructure which will allow this to work correctly, which means
> > something new needs designing.
> >
> How about this:
>
> 1) Still do the batch in do_platform_op() but add an iteration field in
> the interface structure.
>
> 2) Still use on_selected_cpus() but group the adjacent resource_ops
> which have the same cpu and NO_PREEMPT set into one and do it as a whole
> in the new cpu context.
>
> Any suggestion for this?
>
> Chao
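
To make 2) a bit more concrete, below is a rough, untested sketch of the
grouping idea. All names in it (xen_resource_op_t, do_resource_op_batch, the
command and flag values) are illustrative placeholders rather than the actual
interface of this series; the point is only that adjacent ops targeting the
same cpu and chained with NO_PREEMPT get collected into one group, and the
whole group is executed inside a single on_selected_cpus() callback, so a
wrmsr-then-rdmsr pair runs as one synchronous unit on the target cpu.

/*
 * Rough sketch only; structure and function names are placeholders, not
 * the interface introduced by this patch series.
 */
#include <xen/types.h>
#include <xen/smp.h>
#include <xen/cpumask.h>
#include <asm/msr.h>

#define RESOURCE_OP_MSR_READ      0
#define RESOURCE_OP_MSR_WRITE     1
#define RESOURCE_FLAG_NO_PREEMPT  (1u << 0)  /* tie this op to the next one */

typedef struct xen_resource_op {
    uint32_t cmd;     /* RESOURCE_OP_MSR_READ or RESOURCE_OP_MSR_WRITE */
    uint32_t cpu;     /* target cpu for the MSR access */
    uint32_t idx;     /* MSR index */
    uint32_t flags;   /* RESOURCE_FLAG_NO_PREEMPT */
    uint64_t val;     /* value to write, or value read back */
} xen_resource_op_t;

struct resource_batch {
    xen_resource_op_t *ops;   /* first op of a same-cpu group */
    unsigned int nr;          /* number of ops in the group */
};

/*
 * Runs on the target cpu: the whole group executes back to back, so a
 * wrmsr-then-rdmsr pair cannot be interleaved by another caller's access.
 */
static void resource_access_one_cpu(void *info)
{
    struct resource_batch *batch = info;
    unsigned int i;

    for ( i = 0; i < batch->nr; i++ )
    {
        xen_resource_op_t *op = &batch->ops[i];

        if ( op->cmd == RESOURCE_OP_MSR_WRITE )
            wrmsrl(op->idx, op->val);
        else
            rdmsrl(op->idx, op->val);
    }
}

/*
 * Walk the array once, grouping adjacent ops that target the same cpu and
 * are chained with NO_PREEMPT, and run each group with a single IPI.
 */
static void do_resource_op_batch(xen_resource_op_t *ops, unsigned int nr)
{
    unsigned int i = 0;

    while ( i < nr )
    {
        struct resource_batch batch = { .ops = &ops[i], .nr = 1 };

        while ( i + batch.nr < nr &&
                ops[i].cpu == ops[i + batch.nr].cpu &&
                (ops[i + batch.nr - 1].flags & RESOURCE_FLAG_NO_PREEMPT) )
            batch.nr++;

        if ( ops[i].cpu == smp_processor_id() )
            resource_access_one_cpu(&batch);
        else
            on_selected_cpus(cpumask_of(ops[i].cpu),
                             resource_access_one_cpu, &batch, 1);

        i += batch.nr;
    }
}

With this, two callers hitting the same remote cpu can still interleave at
group boundaries, but a paired write/read inside one group can no longer be
split up, which should address the clobbering scenario Andrew described.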