[PATCH] preempt_count overflow with brlocks
From: Robert Love @ 2002-10-08 1:53 UTC
To: torvalds; +Cc: akpm, linux-kernel
Linus,
Now that brlocks loop over NR_CPUS, on SMP every brlock lock/unlock results
in the acquisition/release of 32 locks. This increments/decrements
preempt_count by 32.
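For reference, the preemptible-kernel lock wrappers have roughly the shape
sketched below (illustration only, not the exact 2.5.41 definition), which
is why every iteration of the brlock loop bumps the count:

	/* sketch: what write_lock() does when CONFIG_PREEMPT is enabled */
	#define write_lock(lock) \
	do { \
		preempt_disable();     /* preempt_count++ */ \
		_raw_write_lock(lock); /* the lock operation itself */ \
	} while (0)

So one __br_write_lock() call performs NR_CPUS preempt_disable()s, and
nothing undoes them until the matching __br_write_unlock().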
Since we now have only 7 bits for actually storing the lock depth, we can
nest at most three brlocks. I doubt we ever hold three brlocks at once,
but it is still a concern.
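To spell out the arithmetic (assuming NR_CPUS == 32): 7 bits hold at most
2^7 - 1 = 127, and 127 / 32 = 3, so a fourth nested brlock would overflow
those 7 bits of preempt_count.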
The attached patch disables/enables preemption explicitly, once and only
once, for each lock/unlock. This is also an optimization, as it removes
31 increments, decrements, and conditionals. :)
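A rough tally of the bookkeeping for one write-brlock critical section on
a 32-CPU build, under the assumptions above:

	old:  32 x write_lock()                           preempt_count += 32
	      32 x write_unlock()                         preempt_count -= 32
	new:  preempt_disable(), 32 x _raw_write_lock()   preempt_count += 1
	      32 x _raw_write_unlock(), preempt_enable()  preempt_count -= 1

The _raw_ variants do not touch preempt_count at all; preemption is held
off by the single explicit disable/enable around the loop.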
Problem reported by Andrew Morton.
Patch is against 2.5.41, please apply.
Robert Love
diff -urN linux-2.5.41/lib/brlock.c linux/lib/brlock.c
--- linux-2.5.41/lib/brlock.c 2002-10-07 14:24:45.000000000 -0400
+++ linux/lib/brlock.c 2002-10-07 21:38:02.000000000 -0400
@@ -24,8 +24,9 @@
 {
         int i;
 
+        preempt_disable();
         for (i = 0; i < NR_CPUS; i++)
-                write_lock(&__brlock_array[i][idx]);
+                _raw_write_lock(&__brlock_array[i][idx]);
 }
 
 void __br_write_unlock (enum brlock_indices idx)
@@ -33,7 +34,8 @@
         int i;
 
         for (i = 0; i < NR_CPUS; i++)
-                write_unlock(&__brlock_array[i][idx]);
+                _raw_write_unlock(&__brlock_array[i][idx]);
+        preempt_enable();
 }
 
 #else /* ! __BRLOCK_USE_ATOMICS */
@@ -48,11 +50,12 @@
 {
         int i;
 
+        preempt_disable();
 again:
-        spin_lock(&__br_write_locks[idx].lock);
+        _raw_spin_lock(&__br_write_locks[idx].lock);
         for (i = 0; i < NR_CPUS; i++)
                 if (__brlock_array[i][idx] != 0) {
-                        spin_unlock(&__br_write_locks[idx].lock);
+                        _raw_spin_unlock(&__br_write_locks[idx].lock);
                         barrier();
                         cpu_relax();
                         goto again;