linux-arch.vger.kernel.org archive mirror
* volatile and atomic_t/spinlock_t
@ 2007-06-05 11:48 Heiko Carstens
  2007-06-05 18:38 ` Luck, Tony
  0 siblings, 1 reply; 3+ messages in thread
From: Heiko Carstens @ 2007-06-05 11:48 UTC (permalink / raw)
  To: linux-kernel, linux-arch

I'm just wondering why we have an inconsistency between several archs when
it comes to the definitions of atomic_t, atomic64_t, spinlock_t and their
accessors. Currently we have on most architectures something like

typedef struct { volatile int counter; } atomic_t;

except for i386/x86_64, which has

typedef struct { int counter; } atomic_t;

but then again we have (including x86_64)

typedef struct { volatile long counter; } atomic64_t;

In addition we have

#define atomic_read(v)		((v)->counter)
#define atomic64_read(v)	((v)->counter)

So something like

(1)	while (atomic_read(&v));

may or may not work. Yes, I know it should be

(2)	while (atomic_read(&v))
		cpu_relax();

I'm just wondering about the inconsistency between the 32-bit and 64-bit
variants here, and whether (1) is supposed to work or not.
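
To illustrate why (1) is fragile without volatile, here is a minimal
user-space sketch (the fake_* names are made up for this example; they
are not the kernel types):

/* user-space sketch, not kernel code */
typedef struct { int counter; } fake_atomic_t;			/* i386/x86_64 style */
typedef struct { volatile int counter; } fake_atomic_v_t;	/* most other archs */

#define fake_atomic_read(v)	((v)->counter)

static void spin_nonvolatile(fake_atomic_t *v)
{
	/*
	 * Without volatile the compiler may load v->counter once and
	 * reuse the value, turning this into an infinite loop that
	 * never notices another CPU clearing the counter.
	 */
	while (fake_atomic_read(v))
		;
}

static void spin_volatile(fake_atomic_v_t *v)
{
	/*
	 * With volatile in the type every iteration reloads
	 * v->counter from memory, so the loop terminates once
	 * another CPU stores zero.
	 */
	while (fake_atomic_read(v))
		;
}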

When it comes to spinlock_t we have (on i386):

typedef struct {
	unsigned int slock;
} raw_spinlock_t;

and

static inline int __raw_spin_is_locked(raw_spinlock_t *x)
{
	return *(volatile signed char *)(&(x)->slock) <= 0;
}

Most other architectures have something like this

typedef struct {
	volatile unsigned int slock;
} raw_spinlock_t;

and

#define __raw_spin_is_locked(x)	((x)->slock != 0)

So is

	while (__raw_spin_is_locked(&v));

supposed to work? Or should that be 

	while (__raw_spin_is_locked(&v))
		cpu_relax();

as well and all the volatiles can/should go away?
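
Side note: both spin_is_locked variants force a fresh load on every
iteration, the i386 one by casting through a volatile pointer inside
the accessor, the others by declaring slock itself volatile. A
simplified sketch (user-space, made-up names, and without the signed
char / <= 0 detail of the real i386 code):

typedef struct { unsigned int slock; } fake_raw_spinlock_t;

/* i386 style: plain field, volatile cast in the accessor */
static inline int fake_spin_is_locked(fake_raw_spinlock_t *x)
{
	return *(volatile unsigned int *)(&x->slock) != 0;
}

static void fake_unlock_wait(fake_raw_spinlock_t *x)
{
	/*
	 * The volatile cast forces the load to be redone each time
	 * around the loop.  Without any volatile at all, the loop
	 * body would need something the compiler treats as a
	 * barrier, e.g. cpu_relax().
	 */
	while (fake_spin_is_locked(x))
		;
}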

* RE: volatile and atomic_t/spinlock_t
  2007-06-05 11:48 volatile and atomic_t/spinlock_t Heiko Carstens
@ 2007-06-05 18:38 ` Luck, Tony
  2007-06-05 22:17   ` Heiko Carstens
  0 siblings, 1 reply; 3+ messages in thread
From: Luck, Tony @ 2007-06-05 18:38 UTC (permalink / raw)
  To: Heiko Carstens, linux-kernel, linux-arch

> So is
>
>	while (__raw_spin_is_locked(&v));
>
> supposed to work? Or should that be 
>
>	while (__raw_spin_is_locked(&v))
>		cpu_relax();
>
> as well and all the volatiles can/should go away?

cpu_relax() is a really good idea in every spinloop on
hyper-threaded cores.  It lets the h/w know that we aren't
doing anything useful here, so resources and power can be
diverted to other threads sharing the core.

Avoiding the need for volatile or other compiler optimizer
defeating tricks is a side benefit.
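
On i386, for example, cpu_relax() boils down to roughly this (quoting
from memory, so treat it as a sketch rather than the exact source):

/* include/asm-i386/processor.h, roughly */
static inline void rep_nop(void)
{
	/*
	 * "rep; nop" is the PAUSE instruction; the "memory" clobber
	 * also acts as a compiler barrier, so the lock word gets
	 * re-read on the next loop iteration.
	 */
	__asm__ __volatile__("rep; nop" : : : "memory");
}

#define cpu_relax()	rep_nop()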

-Tony

* Re: volatile and atomic_t/spinlock_t
  2007-06-05 18:38 ` Luck, Tony
@ 2007-06-05 22:17   ` Heiko Carstens
  0 siblings, 0 replies; 3+ messages in thread
From: Heiko Carstens @ 2007-06-05 22:17 UTC (permalink / raw)
  To: Luck, Tony; +Cc: linux-kernel, linux-arch

On Tue, Jun 05, 2007 at 11:38:27AM -0700, Luck, Tony wrote:
> > So is
> >
> >	while (__raw_spin_is_locked(&v));
> >
> > supposed to work? Or should that be 
> >
> >	while (__raw_spin_is_locked(&v))
> >		cpu_relax();
> >
> > as well and all the volatiles can/should go away?
> 
> cpu_relax() is a really good idea in every spinloop on
> hyper-threaded cores.  It lets the h/w know that we aren't
> doing anything useful here, so resources and power can be
> diverted to other threads sharing the core.
> 
> Avoiding the need for volatile or other compiler optimizer
> defeating tricks is a side benefit.

Actually, the current code already requires it to be

	while (__raw_spin_is_locked(&v))
		cpu_relax();

Just like in __raw_spin_unlock_wait(). Oh well, I should have
checked more carefully before posting...
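
For reference, __raw_spin_unlock_wait() on i386 looks roughly like
this (again quoting from memory, so treat it as a sketch):

/* include/asm-i386/spinlock.h, roughly */
#define __raw_spin_unlock_wait(lock) \
	do { while (__raw_spin_is_locked(lock)) cpu_relax(); } while (0)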
