public inbox for linux-kernel@vger.kernel.org
* Recursive spinlocks for Network Recursion Bugs in 2.6.18
@ 2006-12-09  7:31 Jeffrey V. Merkey
From: Jeffrey V. Merkey @ 2006-12-09  7:31 UTC (permalink / raw)
  To: linux-kernel



This code segment in /net/core/dev.c is a prime example of the need for 
recursive spin locks.

       if (dev->flags & IFF_UP) {
                int cpu = smp_processor_id(); /* ok because BHs are off */

                if (dev->xmit_lock_owner != cpu) {

                        HARD_TX_LOCK(dev, cpu);

                        if (!netif_queue_stopped(dev)) {
                                rc = 0;
                                if (!dev_hard_start_xmit(skb, dev)) {
                                        HARD_TX_UNLOCK(dev);
                                        goto out;
                                }
                        }
                        HARD_TX_UNLOCK(dev);
                        if (net_ratelimit())
                                printk(KERN_CRIT "Virtual device %s asks to "
                                       "queue packet!\n", dev->name);
                } else {
                        /* Recursion is detected! It is possible,
                         * unfortunately */
                        if (net_ratelimit())
                                printk(KERN_CRIT "Dead loop on virtual device "
                                       "%s, fix it urgently!\n", dev->name);
                }
        }

Recursive spinlocks perform this logic:

rspin_lock(lock)
{
    if (lock->is_locked && (lock->cpu_owner == cpu I am on))
    {
        lock->use_count++;          /* already ours: just nest deeper */
    }
    else
    {
        spin_lock(lock);
        lock->cpu_owner = cpu I am on;
    }
}

rspin_unlock(lock)
{
    if (lock->use_count)
    {
        lock->use_count--;          /* unwind one nesting level */
    }
    else
    {
        lock->cpu_owner = none;
        spin_unlock(lock);
    }
}

One implementation of this is:

LONG rspin_lock(rlock_t *rlock)
{
   register LONG proc = get_processor_id();
   register LONG retCode;

   /* processor is stored as (cpu + 1) so that 0 means "no owner" */
   if (rlock->lockValue && rlock->processor == (proc + 1))
   {
      rlock->count++;           /* nested acquire on the owning cpu */
      retCode = 1;
   }
   else
   {
      dspin_lock(&rlock->lockValue);
      rlock->processor = (proc + 1);
      retCode = 0;
   }

   return retCode;
}

LONG rspin_unlock(rlock_t *rlock)
{
   register LONG retCode;

   if (rlock->count)
   {
      rlock->count--;           /* unwind one nesting level */
      retCode = 1;
   }
   else
   {
      rlock->processor = 0;     /* clear the owner before dropping the lock */
      dspin_unlock(&rlock->lockValue);
      retCode = 0;
   }

   return retCode;
}

Just a suggestion.   It would be a useful primitive for a lot of contexts 
where code turns interrupts back on inside nested spinlock sections.

Jeff






* Re: Recursive spinlocks for Network Recursion Bugs in 2.6.18
  2006-12-09  7:31 Recursive spinlocks for Network Recursion Bugs in 2.6.18 Jeffrey V. Merkey
@ 2006-12-12  9:20 ` Jarek Poplawski
  0 siblings, 0 replies; 2+ messages in thread
From: Jarek Poplawski @ 2006-12-12  9:20 UTC (permalink / raw)
  To: Jeffrey V. Merkey; +Cc: linux-kernel

On 09-12-2006 08:31, Jeffrey V. Merkey wrote:
> 
> 
> This code segment in /net/core/dev.c is a prime example of the need for 
> recursive spin locks.
...
> Recursive spinlocks perform the logic
...
> LONG rspin_lock(rlock_t *rlock)
...
> LONG rspin_unlock(rlock_t *rlock)
...

Could you give some hint as to how this code from dev.c
should be changed to benefit from this?

Jarek P. 

