From: Boris Ostrovsky
Subject: Re: [PATCH v8 10/19] x86/VPMU: Interface for setting PMU mode and flags
Date: Mon, 28 Jul 2014 13:13:57 -0400
Message-ID: <53D684D5.4090200@oracle.com>
References: <1404225480-2664-1-git-send-email-boris.ostrovsky@oracle.com>
 <1404225480-2664-11-git-send-email-boris.ostrovsky@oracle.com>
 <53D686D60200007800026B96@mail.emea.novell.com>
 <53D67A67.5000002@oracle.com>
 <53D698360200007800026DF4@mail.emea.novell.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Content-Transfer-Encoding: 7bit
In-Reply-To: <53D698360200007800026DF4@mail.emea.novell.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Jan Beulich
Cc: kevin.tian@intel.com, keir@xen.org, suravee.suthikulpanit@amd.com,
 andrew.cooper3@citrix.com, tim@xen.org, dietmar.hahn@ts.fujitsu.com,
 xen-devel@lists.xen.org, jun.nakajima@intel.com
List-Id: xen-devel@lists.xenproject.org

On 07/28/2014 12:36 PM, Jan Beulich wrote:
>>>> On 28.07.14 at 18:29, wrote:
>> On 07/28/2014 11:22 AM, Jan Beulich wrote:
>>>>>> On 01.07.14 at 16:37, wrote:
>>>> +    start = NOW();
>>>> +    /*
>>>> +     * Note that we may fail here if a CPU is hot-unplugged while we are
>>>> +     * waiting. We will then time out.
>>>> +     */
>>>> +    while ( atomic_read(&vpmu_sched_counter) != allbutself_num )
>>>> +    {
>>>> +        /* Give up after 5 seconds */
>>>> +        if ( NOW() > start + SECONDS(5) )
>>>> +        {
>>>> +            printk("vpmu_unload_all: failed to sync\n");
>>>> +            ret = -EBUSY;
>>>> +            break;
>>>> +        }
>>>> +        cpu_relax();
>>>> +        if ( hypercall_preempt_check() )
>>>> +            return hypercall_create_continuation(
>>>> +                __HYPERVISOR_xenpmu_op, "ih", XENPMU_mode_set, arg);
>>>> +    }
>>> I wonder whether this is race free (wrt another CPU doing something
>>> similar) and how you expect the 5s timeout above to ever be reached
>>> (you're virtually guaranteed to get asked to preempt earlier).
>> Race-wise there is xenpmu_mode_lock in the caller (quoted below).
> That wasn't my point: I said "something similar" - imagine another
> hypercall behaving this same way, and both hypercalls getting
> run concurrently.

Isn't it already possible to have two hypercalls doing continuations at
the same time? (Assuming this was your concern)

-boris
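
For context, a minimal sketch of the caller-side serialization Boris refers
to via xenpmu_mode_lock. Everything below apart from the identifiers already
quoted in the thread (the lock name, vpmu_unload_all, and the continuation
machinery) is an assumption made for illustration, not the actual patch code:

    #include <xen/errno.h>
    #include <xen/guest_access.h>
    #include <xen/spinlock.h>

    /*
     * Sketch only, not the real patch: one way a XENPMU_mode_set handler
     * could serialize against a concurrent caller before entering the
     * preemptible wait loop quoted above.  The trylock means a competing
     * hypercall backs off with -EAGAIN instead of racing the first one
     * while it waits or is turned into a continuation.
     */
    static DEFINE_SPINLOCK(xenpmu_mode_lock);

    static long xenpmu_mode_set(XEN_GUEST_HANDLE_PARAM(void) arg)
    {
        long ret;

        if ( !spin_trylock(&xenpmu_mode_lock) )
            return -EAGAIN;

        /*
         * vpmu_unload_all() is assumed to contain the wait loop quoted
         * above: it may return 0, -EBUSY on timeout, or the value produced
         * by hypercall_create_continuation() if preemption was requested.
         */
        ret = vpmu_unload_all(arg);

        spin_unlock(&xenpmu_mode_lock);
        return ret;
    }

The open question in the thread is whether two hypercalls structured like
this, each dropping the lock, returning a continuation, and re-acquiring the
lock on re-entry, can interleave in a harmful way.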