Message-ID: <54EF614B.1010607@greensocs.com>
Date: Thu, 26 Feb 2015 19:09:15 +0100
From: Frederic Konrad
Subject: Re: [Qemu-devel] [RFC 01/10] target-arm: protect cpu_exclusive_*.
To: Peter Maydell
Cc: mttcg@listserver.greensocs.com, "J. Kiszka", Mark Burton, QEMU Developers, Alexander Graf, Paolo Bonzini

On 29/01/2015 16:17, Peter Maydell wrote:
> On 16 January 2015 at 17:19, fred.konrad@greensocs.com wrote:
>> From: KONRAD Frederic
>>
>> This adds a lock to avoid multiple exclusive accesses at the same
>> time in the case of TCG multithreading.
>>
>> Signed-off-by: KONRAD Frederic
> All the same comments I had on this patch earlier still apply:
>
> * I think adding mutex handling code to all the target-*
>   frontends rather than providing facilities in common
>   code for them to use is the wrong approach
> * You will fail to unlock the mutex if the ldrex or strex
>   takes a data abort
> * This is making no attempt to learn from or unify with
>   the existing attempts at handling exclusives in linux-user.
>   When we've done this work we should have a single
>   mechanism for handling exclusives in a multithreaded
>   host environment which is used by both softmmu and useronly
>   configs
>
> thanks
> -- PMM

Hi,

We see some performance improvement with this on SMP, but we still hit
random deadlocks in the guest, in this function:

static inline void arch_spin_lock(arch_spinlock_t *lock)
{
        unsigned long tmp;
        u32 newval;
        arch_spinlock_t lockval;

        prefetchw(&lock->slock);
        __asm__ __volatile__(
"1:     ldrex   %0, [%3]\n"
"       add     %1, %0, %4\n"
"       strex   %2, %1, [%3]\n"
"       teq     %2, #0\n"
"       bne     1b"
        : "=&r" (lockval), "=&r" (newval), "=&r" (tmp)
        : "r" (&lock->slock), "I" (1 << TICKET_SHIFT)
        : "cc");

        while (lockval.tickets.next != lockval.tickets.owner) {
                wfe();
                lockval.tickets.owner = ACCESS_ONCE(lock->tickets.owner);
        }

        smp_mb();
}

It hangs in the final while loop, which immediately made us suspect our
implementation of the atomic instructions.

We fail to unlock the mutex many times during boot, not because of a
data abort but, I think, because we can be interrupted in the middle of
a strex (e.g. its generated code contains two branches)?

We decided to implement the whole atomic instruction inside a helper,
but is it possible to access the data there with e.g.
cpu_physical_memory_rw instead of going through the normal generated
code?

One other thing that looks suspicious: there seems to be one pair of
exclusive_addr/exclusive_val per CPU; is that normal?

Thanks,
Fred
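
P.S. To make the helper question concrete, below is a rough, untested
sketch of what we have in mind for the strex side (something like
target-arm/op_helper.c). The helper name and the global "atomic_mutex"
are made up for the example; env->exclusive_addr and env->exclusive_val
are the existing per-CPU monitor fields, and fault handling, byte order
and alignment are all glossed over:

#include "cpu.h"
#include "qemu/thread.h"
#include "exec/helper-proto.h"

/* Hypothetical global lock serialising all exclusive accesses;
 * it would be qemu_mutex_init()ed once at startup. */
static QemuMutex atomic_mutex;

/* Sketch: do the entire STREX read-compare-write under one lock,
 * inside a single helper call, so nothing can interleave with it. */
uint32_t HELPER(strex_sketch)(CPUARMState *env, uint32_t addr,
                              uint32_t newval)
{
    CPUState *cs = CPU(arm_env_get_cpu(env));
    uint32_t oldval;
    uint32_t result = 1;            /* 1 = store failed, per the ARM ARM */

    qemu_mutex_lock(&atomic_mutex);

    if (env->exclusive_addr == addr) {
        /* Read through the slow path rather than generated code.
         * cpu_memory_rw_debug() takes a guest *virtual* address;
         * cpu_physical_memory_rw() would need the physical address,
         * so a translation step would be required first. */
        cpu_memory_rw_debug(cs, addr, (uint8_t *)&oldval, 4, 0);

        if (oldval == env->exclusive_val) {
            cpu_memory_rw_debug(cs, addr, (uint8_t *)&newval, 4, 1);
            result = 0;             /* store succeeded */
        }
    }

    env->exclusive_addr = -1;       /* clear the monitor either way */
    qemu_mutex_unlock(&atomic_mutex);
    return result;
}

Because the whole sequence runs inside a single helper call, there is
no longer a window where a TB exit or an interrupt can strand the lock,
which is the failure mode we think we are seeing with the two-branch
ldrex/strex translation. A real version would of course still have to
raise a data abort cleanly if the access faults, as Peter pointed out.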