* wait queues
@ 2015-04-19 10:20 Ruben Safir
0 siblings, 0 replies; 8+ messages in thread
From: Ruben Safir @ 2015-04-19 10:20 UTC (permalink / raw)
To: kernelnewbies
I'm now poring over Love's book in detail, and the section in Chapter 4
on how the wait queue is implemented in the text completely surprised me.
He is recommending that you have to write your own wait queue entry
routine for every process? Isn't that reckless?
He is suggesting:

DEFINE_WAIT(wait);                       /* what IS wait? */

add_wait_queue(q, &wait);                /* in the current kernel this involves
                                            flag checking and a linked list */

while (!condition) {                     /* condition is the event we are waiting for */
        prepare_to_wait(q, &wait, TASK_INTERRUPTIBLE);
        if (signal_pending(current)) {
                /* handle the signal, e.g. break out and return -ERESTARTSYS */
        }
        schedule();
}

finish_wait(q, &wait);
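For what it's worth: DEFINE_WAIT(wait) just declares a wait queue *entry* on
the stack that points at the current task, so `wait` is the hook that a later
wake_up() on q uses to find the sleeper. Paraphrasing include/linux/wait.h of
that era (field names vary between kernel versions), it expands to roughly:

        wait_queue_t wait = {
                .private   = current,                        /* the task that will sleep */
                .func      = autoremove_wake_function,       /* run by wake_up() on q    */
                .task_list = LIST_HEAD_INIT(wait.task_list), /* link into q's list       */
        };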
He also writes about how this proceeds to function, and one part confuses me:

5. When the task awakens, it again checks whether the condition is
true. If it is, it exits the loop. Otherwise it again calls schedule.
This is not the order that it seems to follow according to the code.
To me it looks like it should
1 - create the wait queue
2 - add &wait onto queue q
3 - check if the condition is true; if not, enter a while loop
4 - prepare_to_wait, which changes the status of our &wait to
TASK_INTERRUPTIBLE
5 - check for signals ... notice the process is still moving. Does it
stop and wait now?
6 - schedule itself on the runtime rbtree ... which makes NO sense unless
there was a stoppage I didn't know about (see the sketch after this list)
7 - check the condition again and repeat the while loop
7a - if the loop ends, finish_wait ... take it off the queue
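For context, a simplified sketch of what prepare_to_wait() does, paraphrased
from kernel/sched/wait.c of that era rather than quoted exactly:

        void prepare_to_wait(wait_queue_head_t *q, wait_queue_t *wait, int state)
        {
                unsigned long flags;

                spin_lock_irqsave(&q->lock, flags);
                if (list_empty(&wait->task_list))
                        __add_wait_queue(q, wait);   /* put the entry on the queue    */
                set_current_state(state);            /* mark the *task* interruptible */
                spin_unlock_irqrestore(&q->lock, flags);
        }

Setting the state does not deschedule anything by itself, which is why the
process "is still moving" at step 5. The stoppage happens inside schedule():
it sees the task is no longer TASK_RUNNING, takes it off the run queue, and
only returns after a wake_up() on q has made the task runnable again.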
Isn't it reckless to leave this code for users to write? You're
begging for a race condition.
Ruben
* wait queues
@ 2015-04-20 1:23 Ruben Safir
2015-04-20 1:48 ` Ruben Safir
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Ruben Safir @ 2015-04-20 1:23 UTC (permalink / raw)
To: kernelnewbies
I'm now poring over Love's book in detail, and the section in Chapter 4
on how the wait queue is implemented in the text completely surprised me.
He is recommending that you have to write your own wait queue entry
routine for every process? Isn't that reckless?
He is suggesting:

DEFINE_WAIT(wait);                       /* what IS wait? */

add_wait_queue(q, &wait);                /* in the current kernel this involves
                                            flag checking and a linked list */

while (!condition) {                     /* condition is the event we are waiting for */
        prepare_to_wait(q, &wait, TASK_INTERRUPTIBLE);
        if (signal_pending(current)) {
                /* handle the signal, e.g. break out and return -ERESTARTSYS */
        }
        schedule();
}

finish_wait(q, &wait);
He also writes about how this proceeds to function, and one part confuses me:

5. When the task awakens, it again checks whether the condition is
true. If it is, it exits the loop. Otherwise it again calls schedule.
This is not the order that it seems to follow according to the code.
To me it looks like it should
1 - create the wait queue
2 - add &wait onto queue q
3 - check if the condition is true; if not, enter a while loop
4 - prepare_to_wait, which changes the status of our &wait to
TASK_INTERRUPTIBLE
5 - check for signals ... notice the process is still moving. Does it
stop and wait now?
6 - schedule itself on the runtime rbtree ... which makes NO sense unless
there was a stoppage I didn't know about.
7 - check the condition again and repeat the while loop
7a - if the loop ends, finish_wait ... take it off the queue.
Isn't it reckless to leave this code for users to write? You're
begging for a race condition.
Ruben
_______________________________________________
Kernelnewbies mailing list
Kernelnewbies at kernelnewbies.org
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies
* wait queues
2015-04-20 1:23 Ruben Safir
@ 2015-04-20 1:48 ` Ruben Safir
2015-04-20 1:54 ` Fred Chou
2015-04-20 15:23 ` michi1 at michaelblizek.twilightparadox.com
2 siblings, 0 replies; 8+ messages in thread
From: Ruben Safir @ 2015-04-20 1:48 UTC (permalink / raw)
To: kernelnewbies
I assume this is a different wait than the one we covered in the call for
concurrency.
On 04/19/2015 09:23 PM, Ruben Safir wrote:
> I'm now poring over Love's book in detail, and the section in Chapter 4
> on how the wait queue is implemented in the text completely surprised me.
>
> He is recommending that you have to write your own wait queue entry
> routine for every process? Isn't that reckless?
>
> He is suggesting:
>
> DEFINE_WAIT(wait);                      /* what IS wait? */
>
> add_wait_queue(q, &wait);               /* in the current kernel this involves
>                                            flag checking and a linked list */
>
> while (!condition) {                    /* condition is the event we are waiting for */
>         prepare_to_wait(q, &wait, TASK_INTERRUPTIBLE);
>         if (signal_pending(current)) {
>                 /* handle the signal, e.g. break out and return -ERESTARTSYS */
>         }
>         schedule();
> }
>
> finish_wait(q, &wait);
>
> He also writes about how this proceeds to function, and one part confuses me:
>
> 5. When the task awakens, it again checks whether the condition is
> true. If it is, it exits the loop. Otherwise it again calls schedule.
>
>
> This is not the order that it seems to follow according to the code.
>
> To me it looks like it should
> 1 - create the wait queue
> 2 - add &wait onto queue q
> 3 - check if the condition is true; if not, enter a while loop
> 4 - prepare_to_wait, which changes the status of our &wait to
> TASK_INTERRUPTIBLE
See, this here must mean that wait is something else?
> 5 - check for signals ... notice the process is still moving. Does it
> stop and wait now?
> 6 - schedule itself on the runtime rbtree ... which makes NO sense unless
> there was a stoppage I didn't know about.
> 7 - check the condition again and repeat the while loop
> 7a - if the loop ends, finish_wait ... take it off the queue.
>
>
>
> Isn't it reckless to leave this code for users to write? You're
> begging for a race condition.
>
> Ruben
>
> _______________________________________________
> Kernelnewbies mailing list
> Kernelnewbies at kernelnewbies.org
> http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies
>
* wait queues
2015-04-20 1:23 Ruben Safir
2015-04-20 1:48 ` Ruben Safir
@ 2015-04-20 1:54 ` Fred Chou
2015-04-20 8:57 ` Ruben Safir
2015-04-20 15:23 ` michi1 at michaelblizek.twilightparadox.com
2 siblings, 1 reply; 8+ messages in thread
From: Fred Chou @ 2015-04-20 1:54 UTC (permalink / raw)
To: kernelnewbies
On 20/4/2015 9:23 AM, Ruben Safir wrote:
> I'm now poring over Love's book in detail, and the section in Chapter 4
> on how the wait queue is implemented in the text completely surprised me.
>
> He is recommending that you have to write your own wait queue entry
> routine for every process? Isn't that reckless?
>
> He is suggesting:
>
> DEFINE_WAIT(wait);                      /* what IS wait? */
>
> add_wait_queue(q, &wait);               /* in the current kernel this involves
>                                            flag checking and a linked list */
>
> while (!condition) {                    /* condition is the event we are waiting for */
>         prepare_to_wait(q, &wait, TASK_INTERRUPTIBLE);
>         if (signal_pending(current)) {
>                 /* handle the signal, e.g. break out and return -ERESTARTSYS */
>         }
>         schedule();
> }
>
> finish_wait(q, &wait);
>
> He also writes about how this proceeds to function, and one part confuses me:
>
> 5. When the task awakens, it again checks whether the condition is
> true. If it is, it exits the loop. Otherwise it again calls schedule.
>
>
> This is not the order that it seems to follow according to the code.
>
> To me it looks like it should
> 1 - create the wait queue
> 2 - add &wait onto queue q
> 3 - check if the condition is true; if not, enter a while loop
> 4 - prepare_to_wait, which changes the status of our &wait to
> TASK_INTERRUPTIBLE
> 5 - check for signals ... notice the process is still moving. Does it
> stop and wait now?
> 6 - schedule itself on the runtime rbtree ... which makes NO sense unless
> there was a stoppage I didn't know about.
> 7 - check the condition again and repeat the while loop
> 7a - if the loop ends, finish_wait ... take it off the queue.
>
Could this be a lost wake-up problem?
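For comparison, the idiom the kernel's own wait helpers use sets the task
state *before* re-testing the condition, which is what closes the lost
wake-up window. A minimal sketch of that ordering (assuming q is a
wait_queue_head_t * and `condition` is the shared event flag):

        DEFINE_WAIT(wait);

        for (;;) {
                prepare_to_wait(q, &wait, TASK_INTERRUPTIBLE);
                if (condition)                   /* event already happened?  */
                        break;
                if (signal_pending(current))     /* interrupted by a signal? */
                        break;
                schedule();                      /* sleep until woken        */
        }
        finish_wait(q, &wait);

A wake_up() that fires between the condition test and schedule() simply puts
the task back to TASK_RUNNING, so schedule() returns almost immediately
instead of sleeping forever.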
* wait queues
2015-04-20 1:23 Ruben Safir
2015-04-20 1:48 ` Ruben Safir
2015-04-20 1:54 ` Fred Chou
@ 2015-04-20 15:23 ` michi1 at michaelblizek.twilightparadox.com
2015-04-20 16:39 ` Ruben Safir
2 siblings, 1 reply; 8+ messages in thread
From: michi1 at michaelblizek.twilightparadox.com @ 2015-04-20 15:23 UTC (permalink / raw)
To: kernelnewbies
Hi!
On 21:23 Sun 19 Apr , Ruben Safir wrote:
> I'm now poring over Love's book in detail, and the section in Chapter 4
> on how the wait queue is implemented in the text completely surprised me.
>
> He is recommending that you have to write your own wait queue entry
> routine for every process? Isn't that reckless?
I would not recommend that. There are already functions in linux/wait.h for
these purposes, like wait_event_interruptible().
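For example, here is a minimal sketch of using it from a driver; the names
my_queue, data_ready, wait_for_data and data_arrived are made up for
illustration:

        #include <linux/wait.h>
        #include <linux/sched.h>

        static DECLARE_WAIT_QUEUE_HEAD(my_queue);
        static int data_ready;

        /* Sleeper: blocks until data_ready is non-zero or a signal arrives.
         * Returns 0 on success, -ERESTARTSYS if interrupted by a signal. */
        static int wait_for_data(void)
        {
                return wait_event_interruptible(my_queue, data_ready != 0);
        }

        /* Waker: run from another context once the event has happened. */
        static void data_arrived(void)
        {
                data_ready = 1;
                wake_up_interruptible(&my_queue);
        }

The macro expands to essentially the prepare_to_wait()/schedule() loop from
Love's example, so you get the correct ordering without open-coding it.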
-Michi
--
programing a layer 3+4 network protocol for mesh networks
see http://michaelblizek.twilightparadox.com
Thread overview: 8+ messages
2015-04-19 10:20 wait queues Ruben Safir
-- strict thread matches above, loose matches on Subject: below --
2015-04-20 1:23 Ruben Safir
2015-04-20 1:48 ` Ruben Safir
2015-04-20 1:54 ` Fred Chou
2015-04-20 8:57 ` Ruben Safir
2015-04-20 15:23 ` michi1 at michaelblizek.twilightparadox.com
2015-04-20 16:39 ` Ruben Safir
2015-04-21 15:05 ` michi1 at michaelblizek.twilightparadox.com