* Active waiting with yield()
@ 2008-11-14 18:59 Mikulas Patocka
2008-11-14 19:06 ` Alan Cox
0 siblings, 1 reply; 17+ messages in thread
From: Mikulas Patocka @ 2008-11-14 18:59 UTC (permalink / raw)
To: linux-kernel; +Cc: mingo, rml, Alasdair G Kergon, Milan Broz
Hi
Every time I write yield() somewhere in my code, there comes a comment:
"real-time people don't like yield()" --- without any explanation of why
yield() is not suitable at that specific point, just that "people don't
like it".
We all know that waiting with yield() is bad for general code that
processes requests, because repeated scheduling wastes extra CPU time and
the process waiting for some event with yield() is woken up late. In these
normal cases, wait queues should be used.
However, there is code where in my opinion using yield() for waiting is
appropriate, for example:
* driver unload --- check the count of outstanding requests and call
yield() repeatedly until it goes to zero, then unload.
* some rare race condition --- a workaround for a race that triggers once
per hour for one user in several years doesn't really have to be fast.
A typical example where yield() can be used is a driver that counts a
number of outstanding requests --- when the user tries to unload the
driver, the driver will make sure that new requests won't come, repeatedly
yield()s until the count of requests reaches zero and then destroys its
data structures.
Here, yield() has some advantages over wait queues, because:
* reduced size of data structures (and reduced cache footprint for the hot
path that actually processes requests)
* slightly reduced code size even on the hot path (no need to wake up an
empty queue)
* less possibility for bugs coming from the fact that someone forgets to
wake the queue up
The downside of yield() is slower unloading of the driver by a few tens of
milliseconds, but the user doesn't really care about fractions of a second
when unloading drivers.
--- but people complain even if I use yield() in these cases. None of
those complaints contains any valid technical argument --- just that
"someone said that yield() is bad". Is there a real reason why yield() in
these cases is bad? (For example, that with some real-time patches, a
yield()ing high-priority thread would spin forever, killing the
system?) Or should I use msleep(1) instead? Or should wait queues really
be used even in these performance-insensitive cases?
Mikulas
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Active waiting with yield()
2008-11-14 18:59 Active waiting with yield() Mikulas Patocka
@ 2008-11-14 19:06 ` Alan Cox
2008-11-14 19:34 ` Mikulas Patocka
0 siblings, 1 reply; 17+ messages in thread
From: Alan Cox @ 2008-11-14 19:06 UTC (permalink / raw)
To: Mikulas Patocka; +Cc: linux-kernel, mingo, rml, Alasdair G Kergon, Milan Broz
> * driver unload --- check the count of outstanding requests and call
> yield() repeatedly until it goes to zero, then unload.
Use a wakeup when the request count hits zero
> * reduced size of data structures (and reduced cache footprint for the hot
> path that actually processes requests)
The CPU will predict the non-wakeup path if that is normal. You can even
make the wakeup use something like
if (exiting && count == 0)
to get the prediction right
> The downside of yield() is slower unloading of the driver by a few tens of
> milliseconds, but the user doesn't really care about fractions of a second
> when unloading drivers.
And more power usage, plus extremely rude behaviour when virtualising.
There are cases where you have to use cpu_relax()/spins or yield() simply
because the hardware doesn't feel like providing interrupts when you need
them, but for the general case it's best to use proper sleeping.
Remember also you can use a single wait queue for an entire driver for
obscure happenings like unloads.
Alan
* Re: Active waiting with yield()
2008-11-14 19:06 ` Alan Cox
@ 2008-11-14 19:34 ` Mikulas Patocka
2008-11-14 20:57 ` Peter Zijlstra
2008-11-14 21:21 ` Alan Cox
0 siblings, 2 replies; 17+ messages in thread
From: Mikulas Patocka @ 2008-11-14 19:34 UTC (permalink / raw)
To: Alan Cox; +Cc: linux-kernel, mingo, rml, Alasdair G Kergon, Milan Broz
On Fri, 14 Nov 2008, Alan Cox wrote:
> > * driver unload --- check the count of outstanding requests and call
> > yield() repeatedly until it goes to zero, then unload.
>
> Use a wakeup when the request count hits zero
>
> > * reduced size of data structures (and reduced cache footprint for the hot
> > path that actually processes requests)
>
> The CPU will predict the non-wakeup path if that is normal. You can even
> make the wakeup use something like
>
> if (exiting && count == 0)
>
> to get the prediction right
>
> > The downside of yield() is slower unloading of the driver by a few tens of
> > milliseconds, but the user doesn't really care about fractions of a second
> > when unloading drivers.
>
> And more power usage, plus extremely rude behaviour when virtualising.
How can these unlikely cases be rude?
If I have a race condition that gets triggered just for one user in the
world when repeatedly loading & unloading a driver for an hour, and I use
yield() to solve it, what's wrong with it? A wait queue increases cache
footprint for every user (even if I use a preallocated hashed wait queue,
it still eats a cacheline to access it and find out that it's empty).
Mikulas
> There are cases where you have to use cpu_relax()/spins or yield() simply
> because the hardware doesn't feel like providing interrupts when you need
> them, but for the general case it's best to use proper sleeping.
>
> Remember also you can use a single wait queue for an entire driver for
> obscure happenings like unloads.
>
> Alan
>
* Re: Active waiting with yield()
2008-11-14 19:34 ` Mikulas Patocka
@ 2008-11-14 20:57 ` Peter Zijlstra
2008-11-14 21:41 ` Mikulas Patocka
2008-11-14 21:21 ` Alan Cox
1 sibling, 1 reply; 17+ messages in thread
From: Peter Zijlstra @ 2008-11-14 20:57 UTC (permalink / raw)
To: Mikulas Patocka
Cc: Alan Cox, linux-kernel, mingo, rml, Alasdair G Kergon, Milan Broz
On Fri, 2008-11-14 at 14:34 -0500, Mikulas Patocka wrote:
>
> On Fri, 14 Nov 2008, Alan Cox wrote:
>
> > > * driver unload --- check the count of outstanding requests and call
> > > yield() repeatedly until it goes to zero, then unload.
> >
> > Use a wakeup when the request count hits zero
> >
> > > * reduced size of data structures (and reduced cache footprint for the hot
> > > path that actually processes requests)
> >
> > The CPU will predict the non-wakeup path if that is normal. You can even
> > make the wakeup use something like
> >
> > if (exiting && count == 0)
> >
> > to get the prediction right
> >
> > > The downside of yield() is slower unloading of the driver by a few tens of
> > > milliseconds, but the user doesn't really care about fractions of a second
> > > when unloading drivers.
> >
> > And more power usage, plus extremely rude behaviour when virtualising.
>
> How can these unlikely cases be rude?
>
> If I have a race condition that gets triggered just for one user in the
> world when repeatedly loading & unloading a driver for an hour, and I use
> yield() to solve it, what's wrong with it? A wait queue increases cache
> footprint for every user (even if I use a preallocated hashed wait queue,
> it still eats a cacheline to access it and find out that it's empty).
Depending on the situation, yield() might be a NOP and therefore not
wait at all and possibly lock up the machine.
Consider the task in question to be the highest-priority RT task on the
system; then while (!condition) yield(); will lock up the system,
because whatever is supposed to make the condition true will never get a
chance to run (not considering SMP).
Clearly you don't understand it; please refrain from using it. Use
regular condition variables (waitqueues).
The rules about yield are:
- You're likely wrong, don't use it.
- Seriously, you don't need it.
- If you still think you do, goto 1.
In all of the kernel there is 1 valid use (and it might only be in the
-rt kernel - didn't check mainline recently).
The _ONLY_ valid use case of yield(), is if you have two equal priority
FIFO threads that co-depend. And that situation is almost always
avoidable.
* Re: Active waiting with yield()
2008-11-14 20:57 ` Peter Zijlstra
@ 2008-11-14 21:41 ` Mikulas Patocka
2008-11-15 22:55 ` Peter Zijlstra
0 siblings, 1 reply; 17+ messages in thread
From: Mikulas Patocka @ 2008-11-14 21:41 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Alan Cox, linux-kernel, mingo, rml, Alasdair G Kergon, Milan Broz
On Fri, 14 Nov 2008, Peter Zijlstra wrote:
> On Fri, 2008-11-14 at 14:34 -0500, Mikulas Patocka wrote:
> >
> > On Fri, 14 Nov 2008, Alan Cox wrote:
> >
> > > > * driver unload --- check the count of outstanding requests and call
> > > > yield() repeatedly until it goes to zero, then unload.
> > >
> > > Use a wakeup when the request count hits zero
> > >
> > > > * reduced size of data structures (and reduced cache footprint for the hot
> > > > path that actually processes requests)
> > >
> > > The CPU will predict the non-wakeup path if that is normal. You can even
> > > make the wakeup use something like
> > >
> > > if (exiting && count == 0)
> > >
> > > to get the prediction right
> > >
> > > > The downside of yield() is slower unloading of the driver by a few tens of
> > > > milliseconds, but the user doesn't really care about fractions of a second
> > > > when unloading drivers.
> > >
> > > And more power usage, plus extremely rude behaviour when virtualising.
> >
> > How can these unlikely cases be rude?
> >
> > If I have a race condition that gets triggered just for one user in the
> > world when repeatedly loading & unloading a driver for an hour, and I use
> > yield() to solve it, what's wrong with it? A wait queue increases cache
> > footprint for every user (even if I use a preallocated hashed wait queue,
> > it still eats a cacheline to access it and find out that it's empty).
>
> Depending on the situation, yield() might be a NOP and therefore not
> wait at all and possibly lock up the machine.
>
> Consider the task in question to be the highest-priority RT task on the
> system; then while (!condition) yield(); will lock up the system,
> because whatever is supposed to make the condition true will never get a
> chance to run (not considering SMP).
>
> Clearly you don't understand it; please refrain from using it. Use
> regular condition variables (waitqueues).
So, use msleep(1) instead of yield()?
Mikulas
> The rules about yield are:
>
> - You're likely wrong, don't use it.
> - Seriously, you don't need it.
> - If you still think you do, goto 1.
>
> In all of the kernel there is 1 valid use (and it might only be in the
> -rt kernel - didn't check mainline recently).
>
> The _ONLY_ valid use case of yield(), is if you have two equal priority
> FIFO threads that co-depend. And that situation is almost always
> avoidable.
>
* Re: Active waiting with yield()
2008-11-14 21:41 ` Mikulas Patocka
@ 2008-11-15 22:55 ` Peter Zijlstra
2008-11-17 17:39 ` Mikulas Patocka
0 siblings, 1 reply; 17+ messages in thread
From: Peter Zijlstra @ 2008-11-15 22:55 UTC (permalink / raw)
To: Mikulas Patocka
Cc: Alan Cox, linux-kernel, mingo, rml, Alasdair G Kergon, Milan Broz
On Fri, 2008-11-14 at 16:41 -0500, Mikulas Patocka wrote:
> So, use msleep(1) instead of yield()?
wait_event() of course!
* Re: Active waiting with yield()
2008-11-15 22:55 ` Peter Zijlstra
@ 2008-11-17 17:39 ` Mikulas Patocka
2008-11-17 18:01 ` Alan Cox
0 siblings, 1 reply; 17+ messages in thread
From: Mikulas Patocka @ 2008-11-17 17:39 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Alan Cox, linux-kernel, mingo, rml, Alasdair G Kergon, Milan Broz
On Sat, 15 Nov 2008, Peter Zijlstra wrote:
> On Fri, 2008-11-14 at 16:41 -0500, Mikulas Patocka wrote:
>
> > So, use msleep(1) instead of yield()?
>
> wait_event() of course!
Why? Why do developers dislike active waiting when terminating a driver?
Is there any real workload where a 1ms delay on driver unload would be
problematic? Or do they dislike it only because of coding style?
L1 cache miss --- 10ns --- on every request, when using wait queues
msleep latency --- 1ms --- on driver termination, when using msleep
--- so if the driver processes more than 100000 requests between reboots,
wait queues actually slow things down.
Mikulas
* Re: Active waiting with yield()
2008-11-17 17:39 ` Mikulas Patocka
@ 2008-11-17 18:01 ` Alan Cox
2008-11-18 14:34 ` Mikulas Patocka
0 siblings, 1 reply; 17+ messages in thread
From: Alan Cox @ 2008-11-17 18:01 UTC (permalink / raw)
To: Mikulas Patocka
Cc: Peter Zijlstra, linux-kernel, mingo, rml, Alasdair G Kergon,
Milan Broz
> --- so if the driver processes more than 100000 requests between reboots,
> wait queues actually slow things down.
Versus power consumption and virtualisation considerations. Plus your
numbers are wrong. You seem terribly keen to ignore the fact that the
true cost is a predicted branch, usually a predicted branch on a cached
variable, and that you'll only touch the wait queue in rare cases.
I'd also note as an aside modern drivers usually run off krefs so
destruction and thus closedown is refcounted and comes off the last kref
destruct.
Alan
* Re: Active waiting with yield()
2008-11-17 18:01 ` Alan Cox
@ 2008-11-18 14:34 ` Mikulas Patocka
2008-11-18 14:40 ` Alan Cox
2008-11-18 14:49 ` Arjan van de Ven
0 siblings, 2 replies; 17+ messages in thread
From: Mikulas Patocka @ 2008-11-18 14:34 UTC (permalink / raw)
To: Alan Cox
Cc: Peter Zijlstra, linux-kernel, mingo, rml, Alasdair G Kergon,
Milan Broz
On Mon, 17 Nov 2008, Alan Cox wrote:
> > --- so if the driver processes more than 100000 requests between reboots,
> > wait queues actually slow things down.
>
> Versus power consumption and virtualisation considerations. Plus your
> numbers are wrong. You seem terribly keen to ignore the fact that the
> true cost is a predicted branch, usually a predicted branch on a cached
> variable, and that you'll only touch the wait queue in rare cases.
You will always touch the wait queue when finishing the last pending
request --- just to find out that no one is waiting on it.
And besides the cache line, there is coding and testing overhead with wait
queues. If the programmer forgets to decrement the number of pending
requests, he finds out pretty quickly (the driver won't unload). If he
forgets to wake the queue up, the code can run with this bug for a long time
without anyone noticing it --- unless someone tries to unload the driver at a
specific point --- I have seen this too.
> I'd also note as an aside modern drivers usually run off krefs so
> destruction and thus closedown is refcounted and comes off the last kref
> destruct.
>
> Alan
So what are the reasons why you (and others) are against active waiting?
All you are saying is that my reasons are wrong, but you haven't given a
single example where active waiting causes trouble. If there is a workload
where waiting 1ms to 10ms with mdelay(1) on driver unload would cause
discomfort to the user, describe it.
Mikulas
* Re: Active waiting with yield()
2008-11-18 14:34 ` Mikulas Patocka
@ 2008-11-18 14:40 ` Alan Cox
2008-11-18 17:11 ` Mikulas Patocka
2008-11-18 14:49 ` Arjan van de Ven
1 sibling, 1 reply; 17+ messages in thread
From: Alan Cox @ 2008-11-18 14:40 UTC (permalink / raw)
To: Mikulas Patocka
Cc: Peter Zijlstra, linux-kernel, mingo, rml, Alasdair G Kergon,
Milan Broz
> You will always touch the wait queue when finishing the last pending
> request --- just to find out that no one is waiting on it.
Why? If my fast path for this is something like
if (unlikely(foo->unload_pending) && count == 0)
wake_up(..)
chances are that I can put unload_pending somewhere in a cache line with
other stuff (certainly for L2) and it will get predicted well.
> And besides the cache line, there is coding and testing overhead with wait
> queues.
If you want correctness you don't want busy waiting with yields - as Ingo
pointed out already, that's a bug in itself with hard realtime.
> So what are the reasons why you (and others) are against active waiting?
> All you are saying is that my reasons are wrong, but you haven't given a
> single example where active waiting causes trouble. If there is a workload where
Untrue - I've given very clear reasons twice, and Ingo has given you
more. If you don't wish to accept the answers, that is your problem, not
mine; just expect such hacks to get a NAK and not make the kernel.
Alan
--
"I can only provide the information, I can't make you hear it."
- Shelley Bainter
* Re: Active waiting with yield()
2008-11-18 14:40 ` Alan Cox
@ 2008-11-18 17:11 ` Mikulas Patocka
2008-11-18 21:26 ` Alan Cox
2008-11-19 16:14 ` Peter Zijlstra
0 siblings, 2 replies; 17+ messages in thread
From: Mikulas Patocka @ 2008-11-18 17:11 UTC (permalink / raw)
To: Alan Cox
Cc: Peter Zijlstra, linux-kernel, mingo, rml, Alasdair G Kergon,
Milan Broz
On Tue, 18 Nov 2008, Alan Cox wrote:
> > You will always touch the wait queue when finishing the last pending
> > request --- just to find out that no one is waiting on it.
>
> Why? If my fast path for this is something like
>
> if (unlikely(foo->unload_pending) && count == 0)
> wake_up(..)
>
> chances are that I can put unload_pending somewhere in a cache line with
> other stuff (certainly for L2) and it will get predicted well.
This makes a code branch that is very rarely tested and a potential bug.
Every such rarely executed branch is a danger and even a silly typo in the
code can hide there for many years without being noticed.
These rare branches are inevitable in some places; I just don't see a
reason to deliberately make more of them.
> > And besides the cache line, there is coding and testing overhead with wait
> > queues.
>
> If you want correctness you don't want busy waiting with yields - as Ingo
> pointed out already, that's a bug in itself with hard realtime.
So, I say msleep(1) instead of yield(). What are the counterarguments to
msleep?
> > So what are the reasons why you (and others) are against active waiting?
> > All you are saying is that my reasons are wrong, but you haven't given a
> > single example where active waiting causes trouble. If there is a workload where
>
> Untrue - I've given very clear reasons twice, and Ingo has given you
> more. If you don't wish to accept the answers that is your problem not
> mine, just expect such hacks to get a NAK and not make the kernel.
If it is only a matter of coding style and there is no valid reason why
msleep(1) when unloading the driver would hurt, just say so: "we have this
coding policy; we want the code to look like a state machine, independent
of the timer".
But don't invent pseudo-reasons, such as: the system boots in 30 seconds and
shuts down in 5 seconds, yet a 1ms-10ms sleep on shutdown is bad for
virtualization and power consumption and performance and whatever --- even
if it actually happens only to one user in a few years.
Mikulas
> Alan
> --
> "I can only provide the information, I can't make you hear it."
> - Shelley Bainter
>
* Re: Active waiting with yield()
2008-11-18 17:11 ` Mikulas Patocka
@ 2008-11-18 21:26 ` Alan Cox
2008-11-20 9:18 ` Mikulas Patocka
2008-11-19 16:14 ` Peter Zijlstra
1 sibling, 1 reply; 17+ messages in thread
From: Alan Cox @ 2008-11-18 21:26 UTC (permalink / raw)
To: Mikulas Patocka
Cc: Peter Zijlstra, linux-kernel, mingo, rml, Alasdair G Kergon,
Milan Broz
> This makes a code branch that is very rarely tested and a potential bug.
> Every such rarely executed branch is a danger and even a silly typo in the
> code can hide there for many years without being noticed.
Learn to use a debugger. If you want an unusual timing to occur, you
breakpoint the relevant task and suspend it for a bit.
> So, I say msleep(1) instead of yield(). What are the counterarguments to
> msleep?
msleep isn't particularly a problem. You are giving up the CPU and not
wasting so much power and you won't deadlock in realtime. Assuming you
only expect one or two msleep cycles, it's fine.
And if you think virtualisation and power management and correctness
(as Ingo noted) are a "bad reason" you need to wake up to the real world.
Alan
* Re: Active waiting with yield()
2008-11-18 21:26 ` Alan Cox
@ 2008-11-20 9:18 ` Mikulas Patocka
0 siblings, 0 replies; 17+ messages in thread
From: Mikulas Patocka @ 2008-11-20 9:18 UTC (permalink / raw)
To: Alan Cox
Cc: Peter Zijlstra, linux-kernel, mingo, rml, Alasdair G Kergon,
Milan Broz
On Tue, 18 Nov 2008, Alan Cox wrote:
> > This makes a code branch that is very rarely tested and a potential bug.
> > Every such rarely executed branch is a danger and even a silly typo in the
> > code can hide there for many years without being noticed.
>
> Learn to use a debugger. If you want an unusual timing to occur, you
> breakpoint the relevant task and suspend it for a bit.
>
> > So, I say msleep(1) instead of yield(). What are the counterarguments to
> > msleep?
>
> msleep isn't particularly a problem. You are giving up the CPU and not
> wasting so much power and you won't deadlock in realtime. Assuming you
> only expect one or two msleep cycles, it's fine.
So msleep(1) should be OK. I can't think of a case when it could break.
> And if you think virtualisation and power management and correctness
> (as Ingo noted) are a "bad reason" you need to wake up to the real world.
If the involved cases are:
- a race condition that never happened to a user, only seen during
artificial testing
- a race condition that existed for 5 years and just one user hit it
--- then yes, considering power management is a bad reason.
Mikulas
> Alan
>
* Re: Active waiting with yield()
2008-11-18 17:11 ` Mikulas Patocka
2008-11-18 21:26 ` Alan Cox
@ 2008-11-19 16:14 ` Peter Zijlstra
1 sibling, 0 replies; 17+ messages in thread
From: Peter Zijlstra @ 2008-11-19 16:14 UTC (permalink / raw)
To: Mikulas Patocka
Cc: Alan Cox, linux-kernel, mingo, rml, Alasdair G Kergon, Milan Broz
On Tue, 2008-11-18 at 12:11 -0500, Mikulas Patocka wrote:
> On Tue, 18 Nov 2008, Alan Cox wrote:
>
> > > You will always touch the wait queue when finishing the last pending
> > > request --- just to find out that no one is waiting on it.
> >
> > Why? If my fast path for this is something like
> >
> > if (unlikely(foo->unload_pending) && count == 0)
> > wake_up(..)
> >
> > chances are that I can put unload_pending somewhere in a cache line with
> > other stuff (certainly for L2) and it will get predicted well.
>
> This makes a code branch that is very rarely tested and a potential bug.
> Every such rarely executed branch is a danger and even a silly typo in the
> code can hide there for many years without being noticed.
If you cannot get these simple things right, and stubbornly refuse to
listen to people telling you that the kernel is not the place to cut
corners, perhaps you should not be writing kernel code.
* Re: Active waiting with yield()
2008-11-18 14:34 ` Mikulas Patocka
2008-11-18 14:40 ` Alan Cox
@ 2008-11-18 14:49 ` Arjan van de Ven
2008-11-18 17:14 ` Mikulas Patocka
1 sibling, 1 reply; 17+ messages in thread
From: Arjan van de Ven @ 2008-11-18 14:49 UTC (permalink / raw)
To: Mikulas Patocka
Cc: Alan Cox, Peter Zijlstra, linux-kernel, mingo, rml,
Alasdair G Kergon, Milan Broz
On Tue, 18 Nov 2008 09:34:16 -0500 (EST)
> So what are the reasons why you (and others) are against active
> waiting? All you are saying is that my reasons are wrong, but you
> haven't given a single example where active waiting causes trouble. If there
> is a workload where waiting 1ms to 10ms with mdelay(1) on driver
> unload would cause discomfort to the user, describe it.
>
mdelay()
* costs you quite a bit of power
* will cause your cpu to go to full speed
* makes it more likely that your fan goes on
* takes away CPU time from others who do want to run
- including the guy you are waiting for!
* if you do it with interrupts off you can even cause time skew
* adds 10 milliseconds of latency to the entire system, which is very
user-noticeable in a desktop environment (the threshold for that is
like 1 or 2 milliseconds total)
Now there are some cases, mostly during error recovery or driver-init
slow paths, where mdelay() can be justified, but "I'm too lazy to use a
waitqueue or other sleeping construct" is not one of them.
--
Arjan van de Ven Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org
* Re: Active waiting with yield()
2008-11-18 14:49 ` Arjan van de Ven
@ 2008-11-18 17:14 ` Mikulas Patocka
0 siblings, 0 replies; 17+ messages in thread
From: Mikulas Patocka @ 2008-11-18 17:14 UTC (permalink / raw)
To: Arjan van de Ven
Cc: Alan Cox, Peter Zijlstra, linux-kernel, mingo, rml,
Alasdair G Kergon, Milan Broz
On Tue, 18 Nov 2008, Arjan van de Ven wrote:
> On Tue, 18 Nov 2008 09:34:16 -0500 (EST)
> > So what are the reasons why you (and others) are against active
> > waiting? All you are saying is that my reasons are wrong, but you
> > haven't given a single example where active waiting causes trouble. If there
> > is a workload where waiting 1ms to 10ms with mdelay(1) on driver
> > unload would cause discomfort to the user, describe it.
> >
>
> mdelay()
> * costs you quite a bit of power
> * will cause your cpu to go to full speed
> * makes it more likely that your fan goes on
> * takes away CPU time from others who do want to run
> - including the guy you are waiting for!
> * if you do it with interrupts off you can even cause time skew
> * adds 10 milliseconds of latency to the entire system, which is very
> user-noticeable in a desktop environment (the threshold for that is
> like 1 or 2 milliseconds total)
msleep(1) should be better; mdelay() doesn't give other processes a chance
to run.
> Now there are some cases, mostly during error recovery or driver-init
> slow paths, where mdelay() can be justified, but "I'm too lazy to use a
> waitqueue or other sleeping construct" is not one of them.
That is exactly what my initial post was about. I agree that using polling
for normal request processing is stupid, but I don't see why some people
don't like msleep() even in slow paths (such as driver unload).
Mikulas
> --
> Arjan van de Ven Intel Open Source Technology Centre
> For development, discussion and tips for power savings,
> visit http://www.lesswatts.org
>
* Re: Active waiting with yield()
2008-11-14 19:34 ` Mikulas Patocka
2008-11-14 20:57 ` Peter Zijlstra
@ 2008-11-14 21:21 ` Alan Cox
1 sibling, 0 replies; 17+ messages in thread
From: Alan Cox @ 2008-11-14 21:21 UTC (permalink / raw)
To: Mikulas Patocka; +Cc: linux-kernel, mingo, rml, Alasdair G Kergon, Milan Broz
> If I have a race condition that gets triggered just for one user in the
> world when repeatedly loading & unloading a driver for an hour, and I use
> yield() to solve it, what's wrong with it? A wait queue increases cache
> footprint for every user (even if I use a preallocated hashed wait queue,
> it still eats a cacheline to access it and find out that it's empty).
Reread what I wrote.
You don't need to use a hashed queue; you don't even need to reference
the queue in the normal case. Your cost is one variable, which you can
probably sensibly locate, and a predicted jump.
For that you get proper sleeping behaviour, CPU time available to other
tasks or guests, and you don't hang with hard real-time tasks.
end of thread, other threads:[~2008-11-20 9:18 UTC | newest]
Thread overview: 17+ messages -- links below jump to the message on this page --
2008-11-14 18:59 Active waiting with yield() Mikulas Patocka
2008-11-14 19:06 ` Alan Cox
2008-11-14 19:34 ` Mikulas Patocka
2008-11-14 20:57 ` Peter Zijlstra
2008-11-14 21:41 ` Mikulas Patocka
2008-11-15 22:55 ` Peter Zijlstra
2008-11-17 17:39 ` Mikulas Patocka
2008-11-17 18:01 ` Alan Cox
2008-11-18 14:34 ` Mikulas Patocka
2008-11-18 14:40 ` Alan Cox
2008-11-18 17:11 ` Mikulas Patocka
2008-11-18 21:26 ` Alan Cox
2008-11-20 9:18 ` Mikulas Patocka
2008-11-19 16:14 ` Peter Zijlstra
2008-11-18 14:49 ` Arjan van de Ven
2008-11-18 17:14 ` Mikulas Patocka
2008-11-14 21:21 ` Alan Cox