From: Sebastian Huber <sebastian.huber@embedded-brains.de>
To: qemu-devel@nongnu.org
Subject: [Qemu-devel] SEV and WFE instructions on ARM
Date: Thu, 06 Jun 2013 16:34:29 +0200
Message-ID: <51B09DF5.9060606@embedded-brains.de>

Hello,

I want to use QEMU to test some SMP code.  For this I set up QEMU to fire up 
two Cortex-A9 MPCore CPUs (the invocation I use is sketched below, after the 
test loop).  I have the following ticket lock implementation:

#include <stdint.h>

/* Helpers for the ARMv7 barrier and event instructions. */

static inline void _ARM_Data_memory_barrier( void )
{
   __asm__ volatile ( "dmb" : : : "memory" );
}

static inline void _ARM_Data_synchronization_barrier( void )
{
   __asm__ volatile ( "dsb" : : : "memory" );
}

static inline void _ARM_Send_event( void )
{
   __asm__ volatile ( "sev" : : : "memory" );
}

static inline void _ARM_Wait_for_event( void )
{
   __asm__ volatile ( "wfe" : : : "memory" );
}

typedef struct {
   uint32_t next_ticket;
   uint32_t now_serving;
} CPU_SMP_lock_Control;

#define CPU_SMP_LOCK_INITIALIZER { 0, 0 }

static inline void _CPU_SMP_lock_Acquire( CPU_SMP_lock_Control *lock )
{
   uint32_t my_ticket;
   uint32_t next_ticket;
   uint32_t status;

   /* Atomically take a ticket: retry the LDREX/STREX sequence until the
      store-exclusive succeeds (status == 0). */
   __asm__ volatile (
     "1: ldrex %[my_ticket], [%[next_ticket_addr]]\n"
     "add %[next_ticket], %[my_ticket], #1\n"
     "strex %[status], %[next_ticket], [%[next_ticket_addr]]\n"
     "teq %[status], #0\n"
     "bne 1b"
     : [my_ticket] "=&r" (my_ticket),
       [next_ticket] "=&r" (next_ticket),
       [status] "=&r" (status)
     : [next_ticket_addr] "r" (&lock->next_ticket)
     : "cc", "memory"
   );

   /* Wait until our ticket is served; WFE lets the CPU sleep until an event
      (such as the SEV in the release path) wakes it up. */
   while ( my_ticket != lock->now_serving ) {
     _ARM_Wait_for_event();
   }

   _ARM_Data_memory_barrier();
}

static inline void _CPU_SMP_lock_Release( CPU_SMP_lock_Control *lock )
{
   _ARM_Data_memory_barrier();
   ++lock->now_serving;
   /* Make the new now_serving value visible, then wake up CPUs waiting in WFE. */
   _ARM_Data_synchronization_barrier();
   _ARM_Send_event();
}

I run the following code on both CPUs:

while (1) {
   _CPU_SMP_lock_Acquire(&lock);
   ++global_counter;
   _CPU_SMP_lock_Release(&lock);
}
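
For reference, I start QEMU roughly like this (the machine model and the image 
name here are just placeholders for my actual setup):

qemu-system-arm -M vexpress-a9 -cpu cortex-a9 -smp 2 -nographic \
  -kernel smp_test.elf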

It seems that the SEV and WFE instructions are implemented as NOPs in QEMU (see 
the gen_nop_hint() function in "target-arm/translate.c"), so the simulator 
spends most of its time executing the busy-wait loop.  Is it possible to trigger 
a scheduling event in QEMU that stops the simulation on the waiting CPU and 
selects another CPU instead?
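
To illustrate what I have in mind (this is only a sketch, not the actual QEMU 
code; gen_set_pc_im() and DISAS_WFI are what the existing WFI case appears to 
use, and DISAS_WFE is a made-up analogue), the WFE hint could end the current 
translation block so that the other virtual CPU gets scheduled:

/* Sketch only, not a patch: names other than gen_nop_hint(), gen_set_pc_im()
   and DISAS_WFI are hypothetical. */
static void gen_nop_hint(DisasContext *s, int val)
{
    switch (val) {
    case 3: /* wfi: already ends the TB and halts the virtual CPU */
        gen_set_pc_im(s->pc);
        s->is_jmp = DISAS_WFI;
        break;
    case 2: /* wfe: could end the TB here so another virtual CPU can run */
        gen_set_pc_im(s->pc);
        s->is_jmp = DISAS_WFE; /* hypothetical state, would need an exit path */
        break;
    case 4: /* sev */
    default: /* nop */
        break;
    }
}

Even just forcing a translation block exit on WFE, so that the round-robin 
scheduling moves on to the CPU holding the lock, would be enough for my test 
case.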

-- 
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax     : +49 89 189 47 41-09
E-Mail  : sebastian.huber@embedded-brains.de
PGP     : Public key available on request.

This message is not a business communication within the meaning of the EHUG.
