* [PATCH 5/5] percpu_counter: only disable preemption if needed in add_unless_lt()
@ 2010-12-23 3:56 Alex Elder
2010-12-23 6:31 ` Dave Chinner
0 siblings, 1 reply; 3+ messages in thread
From: Alex Elder @ 2010-12-23 3:56 UTC (permalink / raw)
To: xfs
In __percpu_counter_add_unless_lt() we don't need to disable
preemption unless we're manipulating a per-cpu variable. That only
happens in a limited case, so narrow the scope of that preemption to
surround that case. This makes the "out" label rather unnecessary,
so replace a couple of "goto out" jumps with direct returns.
Signed-off-by: Alex Elder <aelder@sgi.com>
---
lib/percpu_counter.c | 21 ++++++++++-----------
1 file changed, 10 insertions(+), 11 deletions(-)
Index: b/lib/percpu_counter.c
===================================================================
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -232,8 +232,6 @@ int __percpu_counter_add_unless_lt(struc
int cpu;
int ret = -1;
- preempt_disable();
-
/*
* Check to see if rough count will be sufficient for
* comparison. First, if the upper bound is too low,
@@ -241,7 +239,7 @@ int __percpu_counter_add_unless_lt(struc
*/
count = percpu_counter_read(fbc);
if (count + error + amount < threshold)
- goto out;
+ return -1;
/*
* Next, if the lower bound is above the threshold, we can
@@ -251,12 +249,15 @@ int __percpu_counter_add_unless_lt(struc
if (count - error + amount > threshold) {
- s32 *pcount = this_cpu_ptr(fbc->counters);
+ s32 *pcount;
+ preempt_disable();
+ pcount = this_cpu_ptr(fbc->counters);
count = *pcount + amount;
if (abs(count) < batch) {
*pcount = count;
- ret = 1;
- goto out;
+ preempt_enable();
+ return 1;
}
+ preempt_enable();
}
/*
@@ -281,10 +282,9 @@ int __percpu_counter_add_unless_lt(struc
}
/*
- * Result is withing the error margin. Run an open-coded sum of the
- * per-cpu counters to get the exact value at this point in time,
- * and if the result greater than the threshold, add the amount to
- * the global counter.
+ * Now add in all the per-cpu counters to compute the exact
+ * value at this point in time, and if the result greater
+ * than the threshold, add the amount to the global counter.
*/
count = fbc->count;
for_each_online_cpu(cpu) {
@@ -301,8 +301,7 @@ int __percpu_counter_add_unless_lt(struc
}
out_unlock:
spin_unlock(&fbc->lock);
-out:
- preempt_enable();
+
return ret;
}
EXPORT_SYMBOL(__percpu_counter_add_unless_lt);
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: [PATCH 5/5] percpu_counter: only disable preemption if needed in add_unless_lt()
2010-12-23 3:56 [PATCH 5/5] percpu_counter: only disable preemption if needed in add_unless_lt() Alex Elder
@ 2010-12-23 6:31 ` Dave Chinner
2010-12-29 16:29 ` Alex Elder
0 siblings, 1 reply; 3+ messages in thread
From: Dave Chinner @ 2010-12-23 6:31 UTC (permalink / raw)
To: Alex Elder; +Cc: xfs
On Wed, Dec 22, 2010 at 09:56:42PM -0600, Alex Elder wrote:
> In __percpu_counter_add_unless_lt() we don't need to disable
> preemption unless we're manipulating a per-cpu variable. That only
> happens in a limited case, so narrow the scope of that preemption to
> surround that case. This makes the "out" label rather unnecessary,
> so replace a couple of "goto out" jumps with direct returns.
>
> Signed-off-by: Alex Elder <aelder@sgi.com>
>
> ---
> lib/percpu_counter.c | 21 ++++++++++-----------
> 1 file changed, 10 insertions(+), 11 deletions(-)
>
> Index: b/lib/percpu_counter.c
> ===================================================================
> --- a/lib/percpu_counter.c
> +++ b/lib/percpu_counter.c
> @@ -232,8 +232,6 @@ int __percpu_counter_add_unless_lt(struc
> int cpu;
> int ret = -1;
>
> - preempt_disable();
> -
> /*
> * Check to see if rough count will be sufficient for
> * comparison. First, if the upper bound is too low,
> @@ -241,7 +239,7 @@ int __percpu_counter_add_unless_lt(struc
> */
> count = percpu_counter_read(fbc);
> if (count + error + amount < threshold)
> - goto out;
> + return -1;
>
> /*
> * Next, if the lower bound is above the threshold, we can
> @@ -251,12 +249,15 @@ int __percpu_counter_add_unless_lt(struc
> if (count - error + amount > threshold) {
> - s32 *pcount = this_cpu_ptr(fbc->counters);
> + s32 *pcount;
>
> + preempt_disable();
> + pcount = this_cpu_ptr(fbc->counters);
> count = *pcount + amount;
> if (abs(count) < batch) {
> *pcount = count;
> - ret = 1;
> - goto out;
> + preempt_enable();
> + return 1;
> }
> + preempt_enable();
> }
Regardless of the other changes, this is not valid. That is:
amount = -1;
count = fbc->count;
.....
<get preempted>
<other operations may significantly change fbc->count (i.e
lots more than error will catch), so the current value of
count in this context is wrong and cannot be trusted>
<start running again>
if (count - error + amount > threshold) {
<not valid to run this lockless optimisation based
on a stale count value>
....
}
Effectively, if we want to be able to use lockless optimisations, we
need to ensure that the value of the global counter that we read
remains within the given error bounds until we have finished making
the lockless modification. That is done via disabling preemption
across the entire function...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: [PATCH 5/5] percpu_counter: only disable preemption if needed in add_unless_lt()
2010-12-23 6:31 ` Dave Chinner
@ 2010-12-29 16:29 ` Alex Elder
0 siblings, 0 replies; 3+ messages in thread
From: Alex Elder @ 2010-12-29 16:29 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs
On Thu, 2010-12-23 at 17:31 +1100, Dave Chinner wrote:
> On Wed, Dec 22, 2010 at 09:56:42PM -0600, Alex Elder wrote:
> > In __percpu_counter_add_unless_lt() we don't need to disable
> > preemption unless we're manipulating a per-cpu variable. That only
> > happens in a limited case, so narrow the scope of that preemption to
> > surround that case. This makes the "out" label rather unnecessary,
> > so replace a couple of "goto out" jumps with direct returns.
. . .
>
> Regardless of the other changes, this is not valid. That is:
You're right. I was thinking about updates to fbc->count
being protected by the spinlock, but that doesn't address
the cached value getting stale if this CPU gets preempted
and another thread passes through this code before the
first one gets resumed.
I'm also looking at the other patches and your responses
and will be done with it today. I don't want to hold
up your pull request any longer.
If you found anything of value in the little series I posted
feel free to incorporate it into your own changes.
-Alex
> amount = -1;
> count = fbc->count;
> .....
>
> <get preempted>
>
> <other operations may significantly change fbc->count (i.e
> lots more than error will catch), so the current value of
> count in this context is wrong and cannot be trusted>
>
> <start running again>
>
> if (count - error + amount > threshold) {
> <not valid to run this lockless optimisation based
> on a stale count value>
>
> ....
> }
>
> Effectively, if we want to be able to use lockless optimisations, we
> need to ensure that the value of the global counter that we read
> remains within the given error bounds until we have finished making
> the lockless modification. That is done via disabling preemption
> across the entire function...
>
> Cheers,
>
> Dave.