* Re: gnu asm help...
@ 2001-06-19 3:06 Rick Hohensee
0 siblings, 0 replies; 10+ messages in thread
From: Rick Hohensee @ 2001-06-19 3:06 UTC (permalink / raw)
To: linux-kernel
The C-names-in-asms stuff is explained in (g?)as.info. The explanation is
a bit strained, but after the third or fourth read it becomes fairly
sensible.
Rick Hohensee
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: gnu asm help...
@ 2001-06-19 20:02 Petr Vandrovec
0 siblings, 0 replies; 10+ messages in thread
From: Petr Vandrovec @ 2001-06-19 20:02 UTC (permalink / raw)
To: Richard B. Johnson; +Cc: linux-kernel
On 19 Jun 01 at 13:21, Richard B. Johnson wrote:
> On Tue, 19 Jun 2001, Timur Tabi wrote:
> > Oh, I see the problem. You could do something like this:
> >
> > cli
> > mov %0, %%eax
> > inc %%eax
> > mov %%eax, %0
> > sti
> >
> > and then return eax, but that won't work on SMP (whereas the "lock inc" does).
> > Doing a global cli might work, though.
Use spinlocks instead of global cli. Global cli can take milliseconds.
> The Intel book(s) state that an interrupt is not acknowledged until
> so many clocks (don't remember the number) after a stack operation.
Reread it. It says 'after an operation on %ss' - that is, after
"mov xxxx,%ss" or "pop %ss" - since the next instruction is expected
to be "movl yyyy,%esp".
Before "lss ..." (it is lss in the Intel mnemonics...) existed, you
could not switch your stack safely without this feature: an NMI could
arrive in the middle of your stack switch if interrupts were not
blocked for one instruction after "mov xxxx,%ss".
BTW, if you chain "mov %eax,%ss" instructions back to back, they are
executed in pairs - an irq can arrive after an even mov, but not after
an odd one (at least on PII and PIII). But that is a bit off topic for
l-k (except that we could try other clones - maybe someone got it wrong?)
Best regards,
Petr Vandrovec
vandrove@vc.cvut.cz
* Re: gnu asm help...
@ 2001-06-19 1:36 Petr Vandrovec
2001-06-19 15:48 ` Timur Tabi
0 siblings, 1 reply; 10+ messages in thread
From: Petr Vandrovec @ 2001-06-19 1:36 UTC (permalink / raw)
To: Timur Tabi; +Cc: linux-kernel, ashok.raj
On 18 Jun 01 at 18:20, Timur Tabi wrote:
> You want to return the variable? Try this:
>
> static __inline__ unsigned long atomic_inc(atomic_t *v)
> {
> __asm__ __volatile__(
> LOCK "incl %0"
> :"=m" (v->counter)
> :"m" (v->counter));
>
> return v->counter;
> }
No. Another CPU might increment the value between the LOCK INCL and
the fetch of v->counter. On the ia32 architecture you are almost out
of luck. You can either try building atomic_inc around CMPXCHG, using
it as a conditional store (but CMPXCHG is not available on the i386),
or you can just guard your atomic variable with a spinlock - but in
that case there is no reason to use atomic_t at all.
Best regards,
Petr Vandrovec
vandrove@vc.cvut.cz
P.S.: Why do you need to know that value? Either you can rewrite your
code with atomic_dec_and_test/atomic_inc_and_test, or you overlooked
some race, or you have a really strange problem.
* Re: gnu asm help...
2001-06-19 1:36 Petr Vandrovec
@ 2001-06-19 15:48 ` Timur Tabi
2001-06-19 17:21 ` Richard B. Johnson
0 siblings, 1 reply; 10+ messages in thread
From: Timur Tabi @ 2001-06-19 15:48 UTC (permalink / raw)
To: linux-kernel
** Reply to message from "Petr Vandrovec" <VANDROVE@vc.cvut.cz> on Tue, 19 Jun
2001 01:36:26 MET-1
> No. Another CPU might increment value between LOCK INCL and
> fetching v->counter. On ia32 architecture you are almost out of
> luck. You can either try building atomic_inc around CMPXCHG,
> using it as conditional store (but CMPXCHG is not available
> on i386), or you can just guard your atomic variable with
> spinlock - but in that case there is no reason for using atomic_t
> at all.
Oh, I see the problem. You could do something like this:
cli
mov %0, %%eax
inc %%eax
mov %%eax, %0
sti
and then return eax, but that won't work on SMP (whereas the "lock inc" does).
Doing a global cli might work, though.
--
Timur Tabi - ttabi@interactivesi.com
Interactive Silicon - http://www.interactivesi.com
* Re: gnu asm help...
2001-06-19 15:48 ` Timur Tabi
@ 2001-06-19 17:21 ` Richard B. Johnson
0 siblings, 0 replies; 10+ messages in thread
From: Richard B. Johnson @ 2001-06-19 17:21 UTC (permalink / raw)
To: Timur Tabi; +Cc: linux-kernel
On Tue, 19 Jun 2001, Timur Tabi wrote:
> ** Reply to message from "Petr Vandrovec" <VANDROVE@vc.cvut.cz> on Tue, 19 Jun
> 2001 01:36:26 MET-1
>
>
> > No. Another CPU might increment value between LOCK INCL and
> > fetching v->counter. On ia32 architecture you are almost out of
> > luck. You can either try building atomic_inc around CMPXCHG,
> > using it as conditional store (but CMPXCHG is not available
> > on i386), or you can just guard your atomic variable with
> > spinlock - but in that case there is no reason for using atomic_t
> > at all.
>
> Oh, I see the problem. You could do something like this:
>
> cli
> mov %0, %%eax
> inc %%eax
> mov %%eax, %0
> sti
>
> and then return eax, but that won't work on SMP (whereas the "lock inc" does).
> Doing a global cli might work, though.
The Intel book(s) state that an interrupt is not acknowledged until
so many clocks (don't remember the number) after a stack operation.
Given this, is an 'attack' by another CPU allowed within this time-frame?
If not, you can do:
        pushl   %ebx
        movl    INPUT_VALUE(%esp), %eax # Get input value
        movl    INPUT_PTR(%esp), %ebx   # Get input pointer
        lock
        addl    %eax, (%ebx)            # Add value
        pushl   (%ebx)                  # Put result on stack
        popl    %eax                    # Return value in EAX
        popl    %ebx
It may be worth an experiment.
In any event, you can always use a local lock to make these
operations atomic.
# Stack offsets
VALUE   = 0x08
POINTER = 0x0C

        .section .data
__local_lock:   .long 0

        .section .text
        .global add_atom
        .type   add_atom,@function
add_atom:
        pushf
        cli
        incb    (__local_lock)          # Set the lock
1:      cmpb    $1, (__local_lock)
        jnz     1b
        pushl   %ebx
        movl    VALUE(%esp), %eax
        movl    POINTER(%esp), %ebx
        addl    %eax, (%ebx)
        movl    (%ebx), %eax
        popl    %ebx
        decb    (__local_lock)          # Release the lock
        popf
        ret
The lock can also be done as:
        incb    (__local_lock)
1:      cmpb    $1, (__local_lock)
        jz      2f
        repz
        nop
        jmp     1b
2:
This way the CPU waiting for the lock (maybe) loops in low-power mode.
Cheers,
Dick Johnson
Penguin : Linux version 2.4.1 on an i686 machine (799.53 BogoMips).
"Memory is like gasoline. You use it up when you are running. Of
course you get it all back when you reboot..."; Actual explanation
obtained from the Micro$oft help desk.
* gnu asm help...
@ 2001-06-18 22:56 Raj, Ashok
2001-06-18 23:18 ` Erik Mouw
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: Raj, Ashok @ 2001-06-18 22:56 UTC (permalink / raw)
To: Linux-Kernel (E-mail)
Hello asm gurus..
I need a simple (??) change to the atomic_inc() functionality, so that
I can increment and return the value of the variable.
The current implementation in linux/include/asm/atomic.h does not do
this job.
Any help would be greatly appreciated.
ashokr
From atomic.h:
Also, if there is any reference for the GNU asm syntax, please send me
a pointer.
I can understand what the LOCK "incl %0" means, but I'm not sure what
the rest is for.
Thanks,
ashokr
static __inline__ void atomic_inc(atomic_t *v)
{
__asm__ __volatile__(
LOCK "incl %0"
:"=m" (v->counter)
:"m" (v->counter));
}
* Re: gnu asm help...
2001-06-18 22:56 Raj, Ashok
@ 2001-06-18 23:18 ` Erik Mouw
2001-06-19 6:25 ` Bohdan Vlasyuk
2001-06-18 23:20 ` Timur Tabi
2001-06-19 7:44 ` Alan Cox
2 siblings, 1 reply; 10+ messages in thread
From: Erik Mouw @ 2001-06-18 23:18 UTC (permalink / raw)
To: Raj, Ashok; +Cc: Linux-Kernel (E-mail)
On Mon, Jun 18, 2001 at 03:56:50PM -0700, Raj, Ashok wrote:
> i can understand what the LOCK "incl %0" means.. but not sure what the rest
> is for.
>
> thanks
> ashokr
>
> static __inline__ void atomic_inc(atomic_t *v)
> {
> __asm__ __volatile__(
> LOCK "incl %0"
> :"=m" (v->counter)
> :"m" (v->counter));
> }
I also don't know the exact meaning, but here are two nice tutorials
about inline assembly:
http://www-106.ibm.com/developerworks/linux/library/l-ia.html
http://www.uwsg.indiana.edu/hypermail/linux/kernel/9804.2/0953.html
Erik
--
J.A.K. (Erik) Mouw, Information and Communication Theory Group, Department
of Electrical Engineering, Faculty of Information Technology and Systems,
Delft University of Technology, PO BOX 5031, 2600 GA Delft, The Netherlands
Phone: +31-15-2783635 Fax: +31-15-2781843 Email: J.A.K.Mouw@its.tudelft.nl
WWW: http://www-ict.its.tudelft.nl/~erik/
* Re: gnu asm help...
2001-06-18 22:56 Raj, Ashok
2001-06-18 23:18 ` Erik Mouw
@ 2001-06-18 23:20 ` Timur Tabi
2001-06-19 7:44 ` Alan Cox
2 siblings, 0 replies; 10+ messages in thread
From: Timur Tabi @ 2001-06-18 23:20 UTC (permalink / raw)
To: Linux-Kernel (E-mail)
** Reply to message from "Raj, Ashok" <ashok.raj@intel.com> on Mon, 18 Jun 2001
15:56:50 -0700
> also if there is any reference to the gnu asm syntax, please send me a
> pointer..
There are lots.
> i can understand what the LOCK "incl %0" means.. but not sure what the rest
> is for.
LOCK just means the x86 "lock" prefix.
incl is the 32-bit version of "inc" (increment).
You want to return the variable? Try this:
static __inline__ unsigned long atomic_inc(atomic_t *v)
{
__asm__ __volatile__(
LOCK "incl %0"
:"=m" (v->counter)
:"m" (v->counter));
return v->counter;
}
--
Timur Tabi - ttabi@interactivesi.com
Interactive Silicon - http://www.interactivesi.com
* Re: gnu asm help...
2001-06-18 22:56 Raj, Ashok
2001-06-18 23:18 ` Erik Mouw
2001-06-18 23:20 ` Timur Tabi
@ 2001-06-19 7:44 ` Alan Cox
2 siblings, 0 replies; 10+ messages in thread
From: Alan Cox @ 2001-06-19 7:44 UTC (permalink / raw)
To: Raj, Ashok; +Cc: "Linux-Kernel (E-mail)"
> I need a simple (??) change to atomic_inc() functionality. so that i can
> increment and return the value of the variable.
Please don't blindly change atomic.h to do this. A large number of
processors don't have the x86 'xadd' functionality. Create/use separate
functions instead.
end of thread, other threads:[~2001-06-19 18:03 UTC | newest]
Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2001-06-19 3:06 gnu asm help Rick Hohensee
-- strict thread matches above, loose matches on Subject: below --
2001-06-19 20:02 Petr Vandrovec
2001-06-19 1:36 Petr Vandrovec
2001-06-19 15:48 ` Timur Tabi
2001-06-19 17:21 ` Richard B. Johnson
2001-06-18 22:56 Raj, Ashok
2001-06-18 23:18 ` Erik Mouw
2001-06-19 6:25 ` Bohdan Vlasyuk
2001-06-18 23:20 ` Timur Tabi
2001-06-19 7:44 ` Alan Cox