From mboxrd@z Thu Jan 1 00:00:00 1970
From: Boris Ostrovsky
Subject: Re: [PATCH 1/3] x86/xen: use memory barriers when enabling local irqs
Date: Tue, 13 Aug 2013 11:18:41 -0400
Message-ID: <520A4E51.1080102@oracle.com>
References: <1376404296-7012-1-git-send-email-david.vrabel@citrix.com> <1376404296-7012-2-git-send-email-david.vrabel@citrix.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Content-Transfer-Encoding: 7bit
In-Reply-To: <1376404296-7012-2-git-send-email-david.vrabel@citrix.com>
To: David Vrabel
Cc: Boris Ostrovsky, xen-devel@lists.xen.org
List-Id: xen-devel@lists.xenproject.org

On 08/13/2013 10:31 AM, David Vrabel wrote:
> From: David Vrabel
>
> Because vcpu->evtchn_upcall_mask and vcpu->evtchn_upcall_pending are
> be written by Xen as well as the guest, using barrier() (a
> compiler-only barrier) in xen_enable_irq() and xen_restore_fl() is not
> sufficient.

Unneeded 'be', and xen_enable_irq -> xen_irq_enable.

> Use mb() (a full memory barrier) instead.

Are evtchn_upcall_mask and evtchn_upcall_pending written from the same
(physical) processor during the potential race? If yes, then I am not
sure this will make any difference, since I think sysret/iret, syscall
and interrupts have an implicit mfence.

It won't hurt to have mb(); all I am trying to say is that this may not
be the cause of the lost events.
-boris

> Signed-off-by: David Vrabel
> ---
>  arch/x86/xen/irq.c |    4 ++--
>  1 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/xen/irq.c b/arch/x86/xen/irq.c
> index 01a4dc0..1a8d0d4 100644
> --- a/arch/x86/xen/irq.c
> +++ b/arch/x86/xen/irq.c
> @@ -60,7 +60,7 @@ static void xen_restore_fl(unsigned long flags)
>
>  	if (flags == 0) {
>  		preempt_check_resched();
> -		barrier(); /* unmask then check (avoid races) */
> +		mb(); /* unmask then check (avoid races) */
>  		if (unlikely(vcpu->evtchn_upcall_pending))
>  			xen_force_evtchn_callback();
>  	}
> @@ -93,7 +93,7 @@ static void xen_irq_enable(void)
>  	/* Doesn't matter if we get preempted here, because any
>  	   pending event will get dealt with anyway. */
>
> -	barrier(); /* unmask then check (avoid races) */
> +	mb(); /* unmask then check (avoid races) */
>  	if (unlikely(vcpu->evtchn_upcall_pending))
>  		xen_force_evtchn_callback();
> }