From mboxrd@z Thu Jan 1 00:00:00 1970
From: michi1@michaelblizek.twilightparadox.com (michi1 at michaelblizek.twilightparadox.com)
Date: Wed, 22 Apr 2015 18:49:13 +0200
Subject: wait queues semaphores kernel implementations
In-Reply-To: <553784C4.60203@mrbrklyn.com>
References: <55345527.2050402@mrbrklyn.com> <20150420152352.GA4333@grml> <55352BD5.9020506@mrbrklyn.com> <20150421150500.GA4412@grml> <553784C4.60203@mrbrklyn.com>
Message-ID: <20150422164913.GA4470@grml>
To: kernelnewbies@lists.kernelnewbies.org
List-Id: kernelnewbies.lists.kernelnewbies.org

Hi!

On 07:23 Wed 22 Apr , Ruben Safir wrote:
> Ruben QUOTED Previously:
>
> << Chapter 4 on the wait queue: how it is implemented in the text
> completely surprised me.
>
> He is recommending that you write your own wait queue entry
> routine for every process? Isn't that reckless?
>
> He is suggesting
>
> DEFINE_WAIT(wait)  /* what IS wait EXACTLY in this context? */

#define DEFINE_WAIT_FUNC(name, function)				\
	wait_queue_t name = {						\
		.private	= current,				\
		.func		= function,				\
		.task_list	= LIST_HEAD_INIT((name).task_list),	\
	}

#define DEFINE_WAIT(name) DEFINE_WAIT_FUNC(name, autoremove_wake_function)

> add_wait_queue(q, &wait);  /* in the current kernel this involves
>                               flag checking and a linked list */
>
> while (!condition) {       /* an event we are waiting for */
>         prepare_to_wait(&q, &wait, TASK_INTERRUPTIBLE);
>         if (signal_pending(current))
>                 /* handle the signal */;
>         schedule();
> }
>
> finish_wait(&q, &wait);
>
> He also writes how this proceeds to function, and one part confuses me:
>
> 5. When the task awakens, it again checks whether the condition is
> true. If it is, it exits the loop. Otherwise it again calls schedule.
>
> This is not the order that it seems to follow according to the code.
> To me it looks like it should:
> 1. create the wait queue
> 2. add &wait onto queue q
> 3. check whether the condition is true; if so, stop; if not, enter a
>    while loop
> 4. prepare_to_wait, which changes the status of our &wait to
>    TASK_INTERRUPTIBLE
> 5. check for signals ... notice the process is still moving. Does it
>    stop and wait now?
> 6. schedule itself on the runtime rbtree ... which makes NO sense
>    unless there was a stoppage I didn't know about.
> 7. check the condition again and repeat the while loop
> 7a. if the loop ends, finish_wait ... take it off the queue.

This is what wait_event_interruptible looks like:
http://lxr.linux.no/linux+*/include/linux/wait.h#L390

It seems that prepare_to_wait is now called before checking the
condition, and that add_wait_queue does not exist anymore.

> Isn't it reckless to leave this to users to write the code? You're
> begging for a race condition.

I agree. This is why I would not recommend it unless you have a good
reason to do so.

...

> Minus the semaphore, that sounds like what we are doing with the wait
> list in the scheduler. But it looks like we are leaving it to the
> user. Why? It is similar but oddly different, so I'm trying to figure
> out what is happening here.

The concept behind a waitqueue is not about counting up and down.
Basically, when you call wait_event_*, you define what you are waiting
for. For example, you have a socket and want to wait for incoming data.
Whenever anything happens to the socket (e.g. data arrives, an error
occurs, ...), somebody calls wake_up; your thread wakes up and checks
whether the condition is true, and then wait_event_* either goes back
to sleep or returns. The difference is that you can have situations
where wait_event_* returns without anybody even having called wake_up.
Also, you can have situations with lots of calls to wake_up, but
wait_event_* always goes back to sleep, because the events which happen
do not cause your condition to become true.
	-Michi
--
programming a layer 3+4 network protocol for mesh networks
see http://michaelblizek.twilightparadox.com