* Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved])
  [not found] ` <4D662248.6040405@in-telegence.net>
@ 2011-02-24 14:23 ` Vivek Goyal
  2011-02-24 14:31   ` Tejun Heo
  0 siblings, 1 reply; 22+ messages in thread
From: Vivek Goyal @ 2011-02-24 14:23 UTC (permalink / raw)
To: linux kernel mailing list, Tejun Heo; +Cc: libvir-list, Dominik Klein

On Thu, Feb 24, 2011 at 10:18:00AM +0100, Dominik Klein wrote:

Hi Dominik,

Thanks for the tests and reports. I checked the latest logs as well, and I
see that CFQ has scheduled a work item, but that work never gets executed. I
never see the trace message from cfq_kick_queue().

I am ccing this to lkml and Tejun to see if he has any suggestions.

Tejun,

I will give you some details about what we have discussed so far.

Dominik is trying the blkio throttling feature to throttle some virtual
machines. He is using a 2.6.37 kernel, and once he launches 3 virtual
machines he notices that the system is essentially frozen. After running
some traces we noticed that CFQ has requests but is no longer dispatching
them to the device.

This problem does not show up with the deadline scheduler and also goes away
with 2.6.38-rc6 kernels.

CFQ seems to have scheduled a work item that never gets executed, which is
why CFQ is frozen. I modified CFQ a bit and captured the following traces.

<...>-3118 [022] 230.001571: 8,16 m N cfq idle timer fired
<...>-3118 [022] 230.001575: 8,16 m N cfq3834 slice expired t=0
<...>-3118 [022] 230.001577: 8,16 m N cfq3834 sl_used=3 disp=37 charge=3 iops=0 sect=16048
<...>-3118 [022] 230.001578: 8,16 m N cfq3834 del_from_rr
<...>-3118 [022] 230.001581: 8,16 m N cfq schedule dispatch: busy_queues=29 rq_queued=128 rq_in_driver=0 q=ffff88080a581ba0

<CFQ hangs after this. No traces>

I have put a trace message in cfq_kick_queue(). If the work had ever been
executed, that trace message should have been present here.
Tejun, I was wondering if you have come across a similar issue in the past
which has been fixed in the 2.6.38-rc kernels? Or any suggestions regarding
how to debug it further.

Thanks
Vivek

> First of all, I need to define the "freeze" or "hang" situation in order
> to avoid misunderstandings. When the "hang" state occurs, it is impossible
> to do any io on the system. vmstat shows 250-300 blocked threads.
> Therefore, it is not possible to open new ssh connections or log into the
> server's console. Established ssh connections, however, keep working. It
> is possible to run commands. Just don't touch any files. That immediately
> hangs the connection.
>
> Okay, now that we got that cleared up ...
>
> During the "hang" situation, it is possible to change from the cfq to the
> deadline scheduler for /dev/sdb (the pv for the lvs). This makes the io
> happen and the system is responsive as usual.
>
> After applying the patch correctly (my mistake), we could see these debug
> messages from debugfs:
>
> qemu-kvm-3955 [004] 823.360594: 8,16 Q WS 3408164104 + 480 [qemu-kvm]
> qemu-kvm-3955 [004] 823.360594: 8,16 S WS 3408164104 + 480 [qemu-kvm]
> <...>-3252 [016] 823.361146: 8,16 m N cfq idle timer fired
> <...>-3252 [016] 823.361151: 8,16 m N cfq3683 slice expired t=0
> <...>-3252 [016] 823.361154: 8,16 m N cfq3683 sl_used=2 disp=25 charge=2 iops=0 sect=9944
> <...>-3252 [016] 823.361154: 8,16 m N cfq3683 del_from_rr
> <...>-3252 [016] 823.361161: 8,16 m N cfq schedule dispatch: busy_queues=38 rq_queued=132 rq_in_driver=0
>
> quote Vivek Goyal
> cfq has 38 busy queues and 132 requests queued and it tries to schedule a
> dispatch, and that dispatch never happens for some reason and cfq is hung
> /quote
>
> So the next idea was to try with 2.6.38-rc6, just in case there is a bug
> in the workqueue logic which got fixed?
>
> And it turns out: With 2.6.38-rc6, the problem does not happen.
> I will see whether I can bisect the kernel patches and see which one was
> the good one. I have to figure out a way to do that, but if I do it and
> find out, I will keep you posted.
>
> Vivek then asked me to use another 2.6.37 debug patch and re-run the test.
> Attached are the logs from that run.
>
> Regards
> Dominik
>
> On 02/23/2011 02:37 PM, Dominik Klein wrote:
> > Hi
> >
> > so I ran these tests again. No patch applied yet. And - at least once -
> > it worked. I did everything exactly the same way as before. Since the
> > logs are 8 MB, even when best-bzip2'd, and I don't want everybody to
> > have to download them, I uploaded them to an external hoster:
> > http://www.megaupload.com/?d=SWKTC0V4
> >
> > Traces were created with
> > blktrace -n 64 -b 16384 -d /dev/sdb -o - | blkparse -i -
> > blktrace -n 64 -b 16384 -d /dev/vdisks/kernel3 -o - | blkparse -i -
> >
> >> Can you please apply the attached patch.
> >
> > Unfortunately not. It cannot be applied to 2.6.37. I guess your source
> > is newer and I fail to find the places in the file to patch manually.
> >
> >> This just makes CFQ's output a little more verbose. Run the test again
> >> and capture the trace.
> >>
> >> - Start the trace on /dev/sdb
> >> - Start the dd jobs in the virt machines
> >> - Wait for the system to hang
> >> - Press CTRL-C
> >> - Make sure there were no lost events, otherwise increase the size and
> >>   number of buffers.
> >
> > Tried that. Unfortunately, even with the maximum buffer size of 16 M [1],
> > this leaves some skips. I also tried to increase the number of buffers
> > over 64, but that produced Oops'es.
> >
> > However, I attached kernel3's blktrace of a case where the error
> > occurred. Maybe you can read something from that.
> >
> >> Can you also open tracing in another window and also trace one of the
> >> throttled dm devices, say /dev/disks/kernel3, following the same
> >> procedure as above. So let the two traces run in parallel.
> >
> > So what next?
> > Regards
> > Dominik
> >
> > [1]
> > http://git.kernel.org/?p=linux/kernel/git/axboe/blktrace.git;a=blob_plain;f=blktrace.c;hb=HEAD
> > and look for "Invalid buffer"

> vmstat 2
> 0   0 0 130316736 16364 74840 0 0 0      0 16891 33021  0 1 99  0
> 8   0 0 129400944 16364 74840 0 0 0     14 17975 33233  4 3 93  0
> 0 306 0 127762016 16364 74840 0 0 0 144642 20478 35476 11 6 28 56
> 1 306 0 127761176 16364 74840 0 0 0      0 17307 33100  3 4 25 69
> 2 306 0 127760904 16364 74840 0 0 0      0 17300 32947  3 4 24 69
> 0 306 0 127760768 16364 74840 0 0 0      0 17189 32953  2 3 25 70
> 1 306 0 127760752 16364 74840 0 0 0      0 17228 32966  2 3 21 74
> 0 306 0 127760752 16364 74840 0 0 0      0 17138 32938  1 4 24 72
> 0 306 0 127760736 16364 74840 0 0 0      0 17159 32954  1 4 22 73
> 1 306 0 127760736 16364 74840 0 0 0      0 17210 33011  1 4 23 73
> 0 306 0 127760736 16364 74840 0 0 0      0 17167 32980  0 4 22 74
> 0 306 0 127760736 16364 74840 0 0 0      0 17137 32930  0 4 21 75
> 1 306 0 127760352 16364 74840 0 0 0      0 17165 32933  0 2 23 75
> 1 306 0 127760472 16364 74840 0 0 0      0 17079 32928  0 0 25 74
> 0 306 0 127760560 16364 74840 0 0 0      0 17188 32950  1 2 22 75
> 2 306 0 127760560 16364 74840 0 0 0      6 17142 32838  1 4 21 74
> 0 306 0 127760560 16364 74840 0 0 0     16 17213 32896  1 4 21 74
> 0 306 0 127760560 16364 74840 0 0 0      0 17198 32899  2 4 21 74
> 0 306 0 127760560 16364 74840 0 0 0     12 17218 32910  1 4 21 74
> 1 306 0 127760544 16364 74840 0 0 0      0 17203 32884  2 3 21 74
> 1 306 0 127760552 16364 74840 0 0 0      0 17199 32904  2 4 21 74
> 0 306 0 127760560 16364 74840 0 0 0      0 17230 32871  2 4 21 74
> 0 306 0 127760560 16364 74840 0 0 0      0 17177 32860  2 4 21 74
> 0 306 0 127760640 16364 74840 0 0 0      0 17185 32888  1 4 21 73
> 0 306 0 127760640 16364 74840 0 0 0      0 17207 32851  2 4 25 70
> 1 306 0 127760640 16364 74840 0 0 0      0 17023 32914  0 4 25 70

> block/blk-core.c    |  1 +
> block/blk.h         |  8 +++++++-
> block/cfq-iosched.c | 11 ++++++++---
> 3 files changed, 16 insertions(+), 4 deletions(-)
>
> Index: linux-2.6/block/cfq-iosched.c
> ===================================================================
> --- linux-2.6.orig/block/cfq-iosched.c	2011-02-22 13:23:25.000000000 -0500
> +++ linux-2.6/block/cfq-iosched.c	2011-02-23 11:54:49.488446104 -0500
> @@ -498,7 +498,7 @@ static inline bool cfq_bio_sync(struct b
>  static inline void cfq_schedule_dispatch(struct cfq_data *cfqd)
>  {
>  	if (cfqd->busy_queues) {
> -		cfq_log(cfqd, "schedule dispatch");
> +		cfq_log(cfqd, "schedule dispatch: busy_queues=%d rq_queued=%d rq_in_driver=%d q=%p", cfqd->busy_queues, cfqd->rq_queued, cfqd->rq_in_driver, cfqd->queue);
>  		kblockd_schedule_work(cfqd->queue, &cfqd->unplug_work);
>  	}
>  }
> @@ -2229,6 +2229,8 @@ static struct cfq_queue *cfq_select_queu
>  {
>  	struct cfq_queue *cfqq, *new_cfqq = NULL;
>
> +	cfq_log(cfqd, "select_queue: busy_queues=%d rq_queued=%d rq_in_driver=%d", cfqd->busy_queues, cfqd->rq_queued, cfqd->rq_in_driver);
> +
>  	cfqq = cfqd->active_queue;
>  	if (!cfqq)
>  		goto new_queue;
> @@ -2499,8 +2501,10 @@ static int cfq_dispatch_requests(struct
>  		return cfq_forced_dispatch(cfqd);
>
>  	cfqq = cfq_select_queue(cfqd);
> -	if (!cfqq)
> +	if (!cfqq) {
> +		cfq_log(cfqd, "select: no cfqq selected");
>  		return 0;
> +	}
>
>  	/*
>  	 * Dispatch a request from this cfqq, if it is allowed
> @@ -3359,7 +3363,7 @@ static void cfq_insert_request(struct re
>  	struct cfq_data *cfqd = q->elevator->elevator_data;
>  	struct cfq_queue *cfqq = RQ_CFQQ(rq);
>
> -	cfq_log_cfqq(cfqd, cfqq, "insert_request");
> +	cfq_log_cfqq(cfqd, cfqq, "insert_request: busy_queues=%d rq_queued=%d rq_in_driver=%d", cfqd->busy_queues, cfqd->rq_queued, cfqd->rq_in_driver);
>  	cfq_init_prio_data(cfqq, RQ_CIC(rq)->ioc);
>
>  	rq_set_fifo_time(rq, jiffies + cfqd->cfq_fifo_expire[rq_is_sync(rq)]);
> @@ -3707,6 +3711,7 @@ static void cfq_kick_queue(struct work_s
>  	struct request_queue *q = cfqd->queue;
>
>  	spin_lock_irq(q->queue_lock);
> +	cfq_log(cfqd, "cfq_kick_queue called. busy_queues=%d rq_queued=%d rq_in_driver=%d", cfqd->busy_queues, cfqd->rq_queued, cfqd->rq_in_driver);
>  	__blk_run_queue(cfqd->queue);
>  	spin_unlock_irq(q->queue_lock);
>  }
> Index: linux-2.6/block/blk-core.c
> ===================================================================
> --- linux-2.6.orig/block/blk-core.c	2011-02-22 13:23:25.000000000 -0500
> +++ linux-2.6/block/blk-core.c	2011-02-23 11:50:34.406700216 -0500
> @@ -413,6 +413,7 @@ void __blk_run_queue(struct request_queu
>  		queue_flag_clear(QUEUE_FLAG_REENTER, q);
>  	} else {
>  		queue_flag_set(QUEUE_FLAG_PLUGGED, q);
> +		trace_printk("not recursing. Scheduling another dispatch\n");
>  		kblockd_schedule_work(q, &q->unplug_work);
>  	}
>  }
> Index: linux-2.6/block/blk.h
> ===================================================================
> --- linux-2.6.orig/block/blk.h	2011-02-18 13:39:47.000000000 -0500
> +++ linux-2.6/block/blk.h	2011-02-23 11:53:37.844304978 -0500
> @@ -57,6 +57,8 @@ static inline struct request *__elv_next
>  {
>  	struct request *rq;
>
> +	trace_printk("elv_next_request called. q=%p\n", q);
> +
>  	while (1) {
>  		while (!list_empty(&q->queue_head)) {
>  			rq = list_entry_rq(q->queue_head.next);
> @@ -68,8 +70,12 @@ static inline struct request *__elv_next
>  			return rq;
>  		}
>
> -		if (!q->elevator->ops->elevator_dispatch_fn(q, 0))
> +		trace_printk("No requests in q->queue_head. Will call elevator. q=%p\n", q);
> +
> +		if (!q->elevator->ops->elevator_dispatch_fn(q, 0)) {
> +			trace_printk("No requests in elevator either q=%p\n", q);
>  			return NULL;
> +		}
>  	}
>  }

^ permalink raw reply	[flat|nested] 22+ messages in thread
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved])
From: Tejun Heo @ 2011-02-24 14:31 UTC (permalink / raw)
To: Vivek Goyal; +Cc: linux kernel mailing list, libvir-list, Dominik Klein

Hello,

On Thu, Feb 24, 2011 at 09:23:03AM -0500, Vivek Goyal wrote:
> Dominik is trying the blkio throttling feature to throttle some virtual
> machines. He is using a 2.6.37 kernel, and once he launches 3 virtual
> machines he notices that the system is essentially frozen. After running
> some traces we noticed that CFQ has requests but is no longer dispatching
> them to the device.
>
> This problem does not show up with the deadline scheduler and also goes
> away with 2.6.38-rc6 kernels.

Hmmm... Maybe the following commit?

commit 7576958a9d5a4a677ad7dd40901cdbb6c1110c98
Author: Tejun Heo <tj@kernel.org>
Date:   Mon Feb 14 14:04:46 2011 +0100

    workqueue: wake up a worker when a rescuer is leaving a gcwq

    After executing the matching works, a rescuer leaves the gcwq whether
    there are more pending works or not.  This may decrease the concurrency
    level to zero and stall execution until a new work item is queued on
    the gcwq.

    Make rescuer wake up a regular worker when it leaves a gcwq if there
    are more works to execute, so that execution isn't stalled.
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Reported-by: Ray Jui <rjui@broadcom.com>
    Cc: stable@kernel.org

--
tejun
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved])
From: Dominik Klein @ 2011-02-24 14:58 UTC (permalink / raw)
To: Tejun Heo; +Cc: Vivek Goyal, linux kernel mailing list, libvir-list

Hi

> Hmmm... Maybe the following commit?
>
> commit 7576958a9d5a4a677ad7dd40901cdbb6c1110c98

Unfortunately not. This does not fix it for me.

Vivek introduced me to git bisect and I will try to do that tomorrow to find
the changeset.

Will report back.

Thanks
Dominik
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved])
From: Tejun Heo @ 2011-02-24 15:17 UTC (permalink / raw)
To: Dominik Klein; +Cc: Vivek Goyal, linux kernel mailing list, libvir-list

On Thu, Feb 24, 2011 at 03:58:22PM +0100, Dominik Klein wrote:
> Unfortunately not. This does not fix it for me.
>
> Vivek introduced me to git bisect and I will try to do that tomorrow to
> find the changeset.

Maybe watching what the workqueue is doing using the following trace events
could be helpful.

# grep workqueue /sys/kernel/debug/tracing/available_events
workqueue:workqueue_insertion
workqueue:workqueue_execution
workqueue:workqueue_creation
workqueue:workqueue_destruction

Thanks.

--
tejun
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved])
From: Dominik Klein @ 2011-02-25 7:24 UTC (permalink / raw)
To: Tejun Heo; +Cc: Vivek Goyal, linux kernel mailing list, libvir-list

[-- Attachment #1: Type: text/plain, Size: 1073 bytes --]

> Maybe watching what the workqueue is doing using the following trace
> events could be helpful.
>
> # grep workqueue /sys/kernel/debug/tracing/available_events
> workqueue:workqueue_insertion
> workqueue:workqueue_execution
> workqueue:workqueue_creation
> workqueue:workqueue_destruction

Since I've never done this before, I will tell you how I captured the trace,
so you know what I did, can see whether I did something wrong, and can
correct me if necessary.

echo blk > /sys/kernel/debug/tracing/current_tracer
echo 1 > /sys/block/sdb/trace/enable
echo workqueue_queue_work >> /sys/kernel/debug/tracing/set_event
echo workqueue_activate_work >> /sys/kernel/debug/tracing/set_event
echo workqueue_execute_start >> /sys/kernel/debug/tracing/set_event
echo workqueue_execute_end >> /sys/kernel/debug/tracing/set_event
cat /sys/kernel/debug/tracing/trace_pipe

And that output I gzip'd.

This was taken with 2.6.37 plus the patch you mentioned, on a Dell R815 with
two 12-core AMD Opteron 6174 CPUs. If you need any more information, please
let me know.

Regards
Dominik

[-- Attachment #2: trace_pipe.gz --]
[-- Type: application/gzip, Size: 114289 bytes --]
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved])
From: Tejun Heo @ 2011-02-25 11:29 UTC (permalink / raw)
To: Dominik Klein; +Cc: Vivek Goyal, linux kernel mailing list, libvir-list

On Fri, Feb 25, 2011 at 08:24:15AM +0100, Dominik Klein wrote:
> Since I've never done this before, I will tell you how I captured the
> trace so you know what I did and can see whether I did something wrong
> and can correct me if necessary.
>
> echo blk > /sys/kernel/debug/tracing/current_tracer
> echo 1 > /sys/block/sdb/trace/enable
> echo workqueue_queue_work >> /sys/kernel/debug/tracing/set_event
> echo workqueue_activate_work >> /sys/kernel/debug/tracing/set_event
> echo workqueue_execute_start >> /sys/kernel/debug/tracing/set_event
> echo workqueue_execute_end >> /sys/kernel/debug/tracing/set_event
> cat /sys/kernel/debug/tracing/trace_pipe
>
> This was taken with 2.6.37 plus the patch you mentioned on a Dell R815
> with two 12-core AMD Opteron 6174 CPUs. If you need any more information,
> please let me know.

Hmmm... well, I have no idea what you were trying to do, but here is some
info which might be helpful.

* queue_work happens when the work item is queued.

* activate_work happens when the work item becomes eligible for execution.
  e.g. if the workqueue's @max_active is limited and the maximum number of
  work items are already in flight, a new item will only get activated
  after one of the in-flight ones retires.
* execute_start marks the actual start of execution.

* execute_end marks the end of execution.

So, I would look for the matching work function, then try to follow what
happens to it after being scheduled, and if it doesn't get executed, see
what's going on with the target workqueue.

Thanks.

--
tejun
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved])
From: Dominik Klein @ 2011-02-25 11:46 UTC (permalink / raw)
To: Tejun Heo; +Cc: Vivek Goyal, linux kernel mailing list, libvir-list

Hi

>> This was taken with 2.6.37 plus the patch you mentioned on a Dell R815
>> with two 12-core AMD Opteron 6174 CPUs. If you need any more
>> information, please let me know.
>
> Hmmm... well, I have no idea what you were trying to do

Long story short: I have a couple of virtual machines. Some of them have
blkio throttling configured, some don't. To test whether the throttling
works, I start

dd if=/dev/zero of=testfile bs=1M count=1500

in each guest simultaneously. The result is that from that point on, no i/o
happens any more. You see the result in the trace.

With 2.6.37 (also tried .1 and .2) it does not work but ends up like I
documented. With 2.6.38-rc1, it does work. With the deadline scheduler, it
also works on 2.6.37.

I am currently in bisect run 2 to find the changeset that fixed it. Will let
you know as soon as I do.

Regards
Dominik

ps. I am by no means a kernel hacker and none of the rest of your email made
any sense to me. Sorry.
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved])
From: Tejun Heo @ 2011-02-25 13:18 UTC (permalink / raw)
To: Dominik Klein; +Cc: Vivek Goyal, linux kernel mailing list, libvir-list

Hello,

On Fri, Feb 25, 2011 at 12:46:16PM +0100, Dominik Klein wrote:
> With 2.6.37 (also tried .1 and .2) it does not work but ends up like I
> documented. With 2.6.38-rc1, it does work. With the deadline scheduler,
> it also works on 2.6.37.

Okay, here's the problematic part.

<idle>-0 [013] 1640.975562: workqueue_queue_work: work struct=ffff88080f14f270 function=blk_throtl_work workqueue=ffff88102c8fc700 req_cpu=13 cpu=13
<idle>-0 [013] 1640.975564: workqueue_activate_work: work struct ffff88080f14f270
<...>-477 [013] 1640.975574: workqueue_execute_start: work struct ffff88080f14f270: function blk_throtl_work
<idle>-0 [013] 1641.087450: workqueue_queue_work: work struct=ffff88080f14f270 function=blk_throtl_work workqueue=ffff88102c8fc700 req_cpu=13 cpu=13

The workqueue is per-cpu, so we only need to follow the cpu=13 cases. @1640,
blk_throtl_work() is queued, activated and starts executing, but never
finishes. The same work item is never executed more than once at the same
time on the same CPU, so when the next work item is queued, it doesn't get
activated until the previous execution is complete.

The next thing to do would be finding out why blk_throtl_work() isn't
finishing. sysrq-t or /proc/PID/stack should show us where it's stalled.

Thanks.

--
tejun
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved])
From: Dominik Klein @ 2011-02-25 14:41 UTC (permalink / raw)
To: Tejun Heo; +Cc: Vivek Goyal, linux kernel mailing list, libvir-list

[-- Attachment #1: Type: text/plain, Size: 1975 bytes --]

On 02/25/2011 02:18 PM, Tejun Heo wrote:
> The next thing to do would be finding out why blk_throtl_work() isn't
> finishing. sysrq-t or /proc/PID/stack should show us where it's stalled.

See attached logs of another run.
sysctl -w kernel.sysrq=1

echo blk > /sys/kernel/debug/tracing/current_tracer
echo 1 > /sys/block/sdb/trace/enable
echo workqueue_queue_work >> /sys/kernel/debug/tracing/set_event
echo workqueue_activate_work >> /sys/kernel/debug/tracing/set_event
echo workqueue_execute_start >> /sys/kernel/debug/tracing/set_event
echo workqueue_execute_end >> /sys/kernel/debug/tracing/set_event

That makes attachment trace_pipe5.gz

echo 8 > /proc/sysrq-trigger
echo t > /proc/sysrq-trigger

That makes attachment console.gz

hth
Dominik

[-- Attachment #2: console.gz --]
[-- Type: application/gzip, Size: 45434 bytes --]

[-- Attachment #3: trace_pipe5.gz --]
[-- Type: application/gzip, Size: 100324 bytes --]
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved])
From: Tejun Heo @ 2011-02-25 14:55 UTC (permalink / raw)
To: Dominik Klein; +Cc: Vivek Goyal, linux kernel mailing list, libvir-list

Hello,

On Fri, Feb 25, 2011 at 03:41:47PM +0100, Dominik Klein wrote:
> See attached logs of another run.

So, the following work item never finished. We can tell that pid 549 started
execution from the last line.

<idle>-0 [017] 1497.601733: workqueue_queue_work: work struct=ffff880809f3fe70 function=blk_throtl_work workqueue=ffff88102c8ba700 req_cpu=17 cpu=17
<idle>-0 [017] 1497.601736: workqueue_activate_work: work struct ffff880809f3fe70
<...>-549 [017] 1497.601754: workqueue_execute_start: work struct ffff880809f3fe70: function blk_throtl_work

And the stack trace of pid 549 is...

[ 1522.220046] kworker/17:1  D ffff88202fc53600  0  549  2 0x00000000
[ 1522.220046]  ffff88082c5bd7c0 0000000000000046 ffff88180a822600 ffff88082c578000
[ 1522.220046]  0000000000013600 ffff88080afaffd8 0000000000013600 0000000000013600
[ 1522.220046]  ffff88082c5bda98 ffff88082c5bdaa0 ffff88082c5bd7c0 0000000000013600
[ 1522.220046] Call Trace:
[ 1522.220046]  [<ffffffff810395c6>] ? __wake_up+0x35/0x46
[ 1522.220046]  [<ffffffff81315de3>] ? io_schedule+0x68/0xa7
[ 1522.220046]  [<ffffffff81182168>] ? get_request_wait+0xee/0x17d
[ 1522.220046]  [<ffffffff810604f1>] ? autoremove_wake_function+0x0/0x2a
[ 1522.220046]  [<ffffffff811826b6>] ? __make_request+0x313/0x45d
[ 1522.220046]  [<ffffffff81180ebd>] ? generic_make_request+0x30d/0x385
[ 1522.220046]  [<ffffffff8105cc79>] ? queue_delayed_work_on+0xfc/0x10a
[ 1522.220046]  [<ffffffff8118c607>] ? blk_throtl_work+0x312/0x32b
[ 1522.220046]  [<ffffffff8118c2f5>] ? blk_throtl_work+0x0/0x32b
[ 1522.220046]  [<ffffffff8105b754>] ? process_one_work+0x1d1/0x2ee
[ 1522.220046]  [<ffffffff8105d1e3>] ? worker_thread+0x12d/0x247
[ 1522.220046]  [<ffffffff8105d0b6>] ? worker_thread+0x0/0x247
[ 1522.220046]  [<ffffffff8105d0b6>] ? worker_thread+0x0/0x247
[ 1522.220046]  [<ffffffff8106009f>] ? kthread+0x7a/0x82
[ 1522.220046]  [<ffffffff8100a824>] ? kernel_thread_helper+0x4/0x10
[ 1522.220046]  [<ffffffff81060025>] ? kthread+0x0/0x82
[ 1522.220046]  [<ffffffff8100a820>] ? kernel_thread_helper+0x0/0x10

The '?'s are because the frame pointer is disabled, which means the stack
trace is guesswork. Can you please turn on CONFIG_FRAME_POINTER just to be
sure? But at any rate, it looks like blk_throtl_work() got stuck trying to
allocate a request.

I don't think the workqueue is causing any problem here. It seems like a
resource deadlock on a request. Vivek, any ideas?

Thanks.

--
tejun
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved])
From: Vivek Goyal @ 2011-02-25 14:57 UTC (permalink / raw)
To: Tejun Heo; +Cc: Dominik Klein, linux kernel mailing list, libvir-list

On Fri, Feb 25, 2011 at 02:18:50PM +0100, Tejun Heo wrote:
> The workqueue is per-cpu, so we only need to follow the cpu=13 cases.
> @1640, blk_throtl_work() is queued, activated and starts executing, but
> never finishes.
>
> The next thing to do would be finding out why blk_throtl_work() isn't
> finishing. sysrq-t or /proc/PID/stack should show us where it's stalled.
Hi Tejun,

blk_throtl_work() calls generic_make_request() to dispatch some bios, and I
guess blk_throtl_work() has been put to sleep because there are no request
descriptors available, and CFQ is frozen so no request descriptors get
freed; hence blk_throtl_work() never finishes.

The following caught my eye.

ksoftirqd/0-3 [000] 1640.983585: 8,16 m N cfq4810 slice expired t=0
ksoftirqd/0-3 [000] 1640.983588: 8,16 m N cfq4810 sl_used=2 disp=6 charge=2 iops=0 sect=2080
ksoftirqd/0-3 [000] 1640.983589: 8,16 m N cfq4810 del_from_rr
ksoftirqd/0-3 [000] 1640.983591: 8,16 m N cfq schedule dispatch
sshd-3125 [004] 1640.983597: workqueue_queue_work: work struct=ffff88102c3a3110 function=flush_to_ldisc workqueue=ffff88182c834a00 req_cpu=4 cpu=4
sshd-3125 [004] 1640.983598: workqueue_activate_work: work struct ffff88102c3a3110

CFQ tries to schedule a work item, but there is no associated
"workqueue_queue_work" trace. So it looks like that work never got queued.

CFQ calls the following.

cfq_log(cfqd, "schedule dispatch");
kblockd_schedule_work(cfqd->queue, &cfqd->unplug_work);

We do see the "schedule dispatch" message, and kblockd_schedule_work() calls
queue_work(). So what happened here? This is strange. I will put one more
trace message after kblockd_schedule_work() to confirm that the function
returned.

Thanks
Vivek
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved])
From: Tejun Heo @ 2011-02-25 15:03 UTC (permalink / raw)
To: Vivek Goyal; +Cc: Dominik Klein, linux kernel mailing list, libvir-list

Hello,

On Fri, Feb 25, 2011 at 09:57:08AM -0500, Vivek Goyal wrote:
> CFQ tries to schedule a work item, but there is no associated
> "workqueue_queue_work" trace. So it looks like that work never got queued.
>
> We do see the "schedule dispatch" message, and kblockd_schedule_work()
> calls queue_work(). So what happened here? This is strange.

It could be that the unplug work was already queued and in a pending state.
The second queueing request will be ignored then.
So, I think the problem is that blk_throtl_work() occupies kblockd but
requires another work item (unplug_work) to make forward progress.  In
such cases, forward progress cannot be guaranteed.  Either
blk_throtl_work() or cfq unplug work should use a separate workqueue.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 22+ messages in thread
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved]) 2011-02-25 15:03 ` Tejun Heo @ 2011-02-25 15:11 ` Vivek Goyal 2011-02-25 15:15 ` Vivek Goyal 2011-02-25 16:03 ` Vivek Goyal 0 siblings, 2 replies; 22+ messages in thread From: Vivek Goyal @ 2011-02-25 15:11 UTC (permalink / raw) To: Tejun Heo; +Cc: Dominik Klein, linux kernel mailing list, libvir-list On Fri, Feb 25, 2011 at 04:03:29PM +0100, Tejun Heo wrote: > Hello, > > On Fri, Feb 25, 2011 at 09:57:08AM -0500, Vivek Goyal wrote: > > blk_throtl_work() calls generic_make_request() to dispatch some bios and I > > guess blk_throtl_work() has been put to sleep because threre are no request > > descriptors available and CFQ is frozen so no requests descriptors get freed > > hence blk_throtl_work() never finishes. > > > > Following caught my eye. > > > > ksoftirqd/0-3 [000] 1640.983585: 8,16 m N cfq4810 slice > > expired t=0 > > ksoftirqd/0-3 [000] 1640.983588: 8,16 m N cfq4810 > > sl_used=2 disp=6 charge=2 iops=0 sect=2080 > > ksoftirqd/0-3 [000] 1640.983589: 8,16 m N cfq4810 > > del_from_rr > > ksoftirqd/0-3 [000] 1640.983591: 8,16 m N cfq schedule > > dispatch > > sshd-3125 [004] 1640.983597: workqueue_queue_work: work > > struct=ffff88102c3a3110 function=flush_to_ldisc workqueue=ffff88182c834a00 > > req_cpu=4 cpu=4 > > sshd-3125 [004] 1640.983598: workqueue_activate_work: work > > struct ffff88102c3a3110 > > > > CFQ tries to schedule a work and but there is no associated > > "workqueue_queue_work" trace. So it looks like that work never got queued. > > > > CFQ calls following. > > > > cfq_log(cfqd, "schedule dispatch"); > > kblockd_schedule_work(cfqd->queue, &cfqd->unplug_work); > > > > We do see "schedule dispatch" message and kblockd_schedule_work() calls > > queue_work(). So what happended here? This is strange. I will put one > > more trace after kblockd_schedule_work() to trace that function returned. 
> 
> It could be that the unplug work was already queued and in pending
> state.  The second queueing request will be ignored then.  So, I think
> the problem is that blk_throtl_work() occupies kblockd but requires
> another work item (unplug_work) to make forward progress.  In such
> cases, forward progress cannot be guaranteed.  Either
> blk_throtl_work() or cfq unplug work should use a separate workqueue.

Ok, that would make sense. So blk_throtl_work() cannot finish as CFQ is
not making progress and no request descriptors are being freed, and
unplug_work() is not being called because blk_throtl_work() has not
finished. So that's a cyclic dependency, and I should use a separate
workqueue for queueing throttle related work. I will write a patch.

Thanks
Vivek

^ permalink raw reply	[flat|nested] 22+ messages in thread
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved]) 2011-02-25 15:11 ` Vivek Goyal @ 2011-02-25 15:15 ` Vivek Goyal 2011-02-25 16:03 ` Vivek Goyal 1 sibling, 0 replies; 22+ messages in thread From: Vivek Goyal @ 2011-02-25 15:15 UTC (permalink / raw) To: Tejun Heo; +Cc: Dominik Klein, linux kernel mailing list, libvir-list On Fri, Feb 25, 2011 at 10:11:13AM -0500, Vivek Goyal wrote: > On Fri, Feb 25, 2011 at 04:03:29PM +0100, Tejun Heo wrote: > > Hello, > > > > On Fri, Feb 25, 2011 at 09:57:08AM -0500, Vivek Goyal wrote: > > > blk_throtl_work() calls generic_make_request() to dispatch some bios and I > > > guess blk_throtl_work() has been put to sleep because threre are no request > > > descriptors available and CFQ is frozen so no requests descriptors get freed > > > hence blk_throtl_work() never finishes. > > > > > > Following caught my eye. > > > > > > ksoftirqd/0-3 [000] 1640.983585: 8,16 m N cfq4810 slice > > > expired t=0 > > > ksoftirqd/0-3 [000] 1640.983588: 8,16 m N cfq4810 > > > sl_used=2 disp=6 charge=2 iops=0 sect=2080 > > > ksoftirqd/0-3 [000] 1640.983589: 8,16 m N cfq4810 > > > del_from_rr > > > ksoftirqd/0-3 [000] 1640.983591: 8,16 m N cfq schedule > > > dispatch > > > sshd-3125 [004] 1640.983597: workqueue_queue_work: work > > > struct=ffff88102c3a3110 function=flush_to_ldisc workqueue=ffff88182c834a00 > > > req_cpu=4 cpu=4 > > > sshd-3125 [004] 1640.983598: workqueue_activate_work: work > > > struct ffff88102c3a3110 > > > > > > CFQ tries to schedule a work and but there is no associated > > > "workqueue_queue_work" trace. So it looks like that work never got queued. > > > > > > CFQ calls following. > > > > > > cfq_log(cfqd, "schedule dispatch"); > > > kblockd_schedule_work(cfqd->queue, &cfqd->unplug_work); > > > > > > We do see "schedule dispatch" message and kblockd_schedule_work() calls > > > queue_work(). So what happended here? This is strange. 
I will put one
> > > more trace after kblockd_schedule_work() to trace that function returned.
> > 
> > It could be that the unplug work was already queued and in pending
> > state.  The second queueing request will be ignored then.  So, I think
> > the problem is that blk_throtl_work() occupies kblockd but requires
> > another work item (unplug_work) to make forward progress.  In such
> > cases, forward progress cannot be guaranteed.  Either
> > blk_throtl_work() or cfq unplug work should use a separate workqueue.
> 
> Ok, that would make sense. So blk_throtl_work() can not finish as CFQ
> is not making progress and no request descriptors are being freed and
> unplug_work() is not being called because blk_throtl_work() has not finished.
> So that's cyclic dependency and I should use a separate work queue for
> queueing throttle related work. I will write a patch.

The only thing unexplained is why the same problem does not happen in
2.6.38-rc kernels.

Thanks
Vivek

^ permalink raw reply	[flat|nested] 22+ messages in thread
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved]) 2011-02-25 15:11 ` Vivek Goyal 2011-02-25 15:15 ` Vivek Goyal @ 2011-02-25 16:03 ` Vivek Goyal 2011-02-25 16:09 ` Tejun Heo 1 sibling, 1 reply; 22+ messages in thread From: Vivek Goyal @ 2011-02-25 16:03 UTC (permalink / raw) To: Dominik Klein; +Cc: linux kernel mailing list, libvir-list, Tejun Heo On Fri, Feb 25, 2011 at 10:11:13AM -0500, Vivek Goyal wrote: > On Fri, Feb 25, 2011 at 04:03:29PM +0100, Tejun Heo wrote: > > Hello, > > > > On Fri, Feb 25, 2011 at 09:57:08AM -0500, Vivek Goyal wrote: > > > blk_throtl_work() calls generic_make_request() to dispatch some bios and I > > > guess blk_throtl_work() has been put to sleep because threre are no request > > > descriptors available and CFQ is frozen so no requests descriptors get freed > > > hence blk_throtl_work() never finishes. > > > > > > Following caught my eye. > > > > > > ksoftirqd/0-3 [000] 1640.983585: 8,16 m N cfq4810 slice > > > expired t=0 > > > ksoftirqd/0-3 [000] 1640.983588: 8,16 m N cfq4810 > > > sl_used=2 disp=6 charge=2 iops=0 sect=2080 > > > ksoftirqd/0-3 [000] 1640.983589: 8,16 m N cfq4810 > > > del_from_rr > > > ksoftirqd/0-3 [000] 1640.983591: 8,16 m N cfq schedule > > > dispatch > > > sshd-3125 [004] 1640.983597: workqueue_queue_work: work > > > struct=ffff88102c3a3110 function=flush_to_ldisc workqueue=ffff88182c834a00 > > > req_cpu=4 cpu=4 > > > sshd-3125 [004] 1640.983598: workqueue_activate_work: work > > > struct ffff88102c3a3110 > > > > > > CFQ tries to schedule a work and but there is no associated > > > "workqueue_queue_work" trace. So it looks like that work never got queued. > > > > > > CFQ calls following. > > > > > > cfq_log(cfqd, "schedule dispatch"); > > > kblockd_schedule_work(cfqd->queue, &cfqd->unplug_work); > > > > > > We do see "schedule dispatch" message and kblockd_schedule_work() calls > > > queue_work(). So what happended here? This is strange. 
I will put one
> > > more trace after kblockd_schedule_work() to trace that function returned.
> > 
> > It could be that the unplug work was already queued and in pending
> > state.  The second queueing request will be ignored then.  So, I think
> > the problem is that blk_throtl_work() occupies kblockd but requires
> > another work item (unplug_work) to make forward progress.  In such
> > cases, forward progress cannot be guaranteed.  Either
> > blk_throtl_work() or cfq unplug work should use a separate workqueue.
> 
> Ok, that would make sense. So blk_throtl_work() can not finish as CFQ
> is not making progress and no request descriptors are being freed and
> unplug_work() is not being called because blk_throtl_work() has not finished.
> So that's cyclic dependency and I should use a separate work queue for
> queueing throttle related work. I will write a patch.
> 

Hi Dominik,

Can you please try the attached patch and see if it fixes the issue.

Thanks
Vivek

o Use a separate workqueue for throttle related work and don't reuse the
  kblockd workqueue, as there is a cyclic dependency between cfq unplug
  work and throttle dispatch work.
Yet-to-be-signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/blk-throttle.c   |   28 ++++++++++++++++++++--------
 include/linux/blkdev.h |    2 --
 2 files changed, 20 insertions(+), 10 deletions(-)

Index: linux-2.6/block/blk-throttle.c
===================================================================
--- linux-2.6.orig/block/blk-throttle.c	2011-02-21 22:30:39.000000000 -0500
+++ linux-2.6/block/blk-throttle.c	2011-02-25 10:53:51.884672758 -0500
@@ -20,6 +20,10 @@ static int throtl_quantum = 32;
 /* Throttling is performed over 100ms slice and after that slice is renewed */
 static unsigned long throtl_slice = HZ/10;	/* 100 ms */
 
+/* A workqueue to queue throttle related work */
+static struct workqueue_struct *kthrotld_workqueue;
+void throtl_schedule_delayed_work(struct throtl_data *td, unsigned long delay);
+
 struct throtl_rb_root {
 	struct rb_root rb;
 	struct rb_node *left;
@@ -146,6 +150,12 @@ static inline struct throtl_grp *throtl_
 	return tg;
 }
 
+int kthrotld_schedule_delayed_work(struct throtl_data *td,
+		struct delayed_work *dwork, unsigned long delay)
+{
+	return queue_delayed_work(kthrotld_workqueue, dwork, delay);
+}
+
 static void throtl_put_tg(struct throtl_grp *tg)
 {
 	BUG_ON(atomic_read(&tg->ref) <= 0);
@@ -346,10 +356,9 @@ static void throtl_schedule_next_dispatc
 	update_min_dispatch_time(st);
 
 	if (time_before_eq(st->min_disptime, jiffies))
-		throtl_schedule_delayed_work(td->queue, 0);
+		throtl_schedule_delayed_work(td, 0);
 	else
-		throtl_schedule_delayed_work(td->queue,
-				(st->min_disptime - jiffies));
+		throtl_schedule_delayed_work(td, (st->min_disptime - jiffies));
 }
 
 static inline void
@@ -809,10 +818,9 @@ void blk_throtl_work(struct work_struct
 }
 
 /* Call with queue lock held */
-void throtl_schedule_delayed_work(struct request_queue *q, unsigned long delay)
+void throtl_schedule_delayed_work(struct throtl_data *td, unsigned long delay)
 {
-	struct throtl_data *td = q->td;
 	struct delayed_work *dwork = &td->throtl_work;
 
 	if (total_nr_queued(td) > 0) {
@@ -821,12 +829,11 @@ void throtl_schedule_delayed_work(struct
 		 * Cancel that and schedule a new one.
 		 */
 		__cancel_delayed_work(dwork);
-		kblockd_schedule_delayed_work(q, dwork, delay);
+		kthrotld_schedule_delayed_work(td, dwork, delay);
 		throtl_log(td, "schedule work. delay=%lu jiffies=%lu",
 				delay, jiffies);
 	}
 }
-EXPORT_SYMBOL(throtl_schedule_delayed_work);
 
 static void
 throtl_destroy_tg(struct throtl_data *td, struct throtl_grp *tg)
@@ -895,7 +902,7 @@ static void throtl_update_blkio_group_co
 	xchg(&tg->limits_changed, true);
 	xchg(&td->limits_changed, true);
 	/* Schedule a work now to process the limit change */
-	throtl_schedule_delayed_work(td->queue, 0);
+	throtl_schedule_delayed_work(td, 0);
 }
 
 /*
@@ -1113,6 +1120,11 @@ void blk_throtl_exit(struct request_queu
 
 static int __init throtl_init(void)
 {
+	kthrotld_workqueue = alloc_workqueue("kthrotld",
+					WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
+	if (!kthrotld_workqueue)
+		panic("Failed to create kthrotld\n");
+
 	blkio_policy_register(&blkio_policy_throtl);
 	return 0;
 }

Index: linux-2.6/include/linux/blkdev.h
===================================================================
--- linux-2.6.orig/include/linux/blkdev.h	2011-02-21 22:30:39.000000000 -0500
+++ linux-2.6/include/linux/blkdev.h	2011-02-25 10:50:50.706137004 -0500
@@ -1136,7 +1136,6 @@ static inline uint64_t rq_io_start_time_
 extern int blk_throtl_init(struct request_queue *q);
 extern void blk_throtl_exit(struct request_queue *q);
 extern int blk_throtl_bio(struct request_queue *q, struct bio **bio);
-extern void throtl_schedule_delayed_work(struct request_queue *q, unsigned long delay);
 #else /* CONFIG_BLK_DEV_THROTTLING */
 static inline int blk_throtl_bio(struct request_queue *q, struct bio **bio)
 {
@@ -1145,7 +1144,6 @@ static inline int blk_throtl_bio(struct
 static inline int blk_throtl_init(struct request_queue *q) { return 0; }
 static inline int blk_throtl_exit(struct request_queue *q) { return 0; }
-static inline void throtl_schedule_delayed_work(struct request_queue *q, unsigned long delay) {}
 #endif	/* CONFIG_BLK_DEV_THROTTLING */
 
 #define MODULE_ALIAS_BLOCKDEV(major,minor) \

^ permalink raw reply	[flat|nested] 22+ messages in thread
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved])
  2011-02-25 16:03 ` Vivek Goyal
@ 2011-02-25 16:09 ` Tejun Heo
  2011-02-25 16:19 ` Vivek Goyal
  2011-02-25 16:30 ` Vivek Goyal
  0 siblings, 2 replies; 22+ messages in thread
From: Tejun Heo @ 2011-02-25 16:09 UTC (permalink / raw)
To: Vivek Goyal; +Cc: Dominik Klein, linux kernel mailing list, libvir-list

Hello,

On Fri, Feb 25, 2011 at 11:03:53AM -0500, Vivek Goyal wrote:
> +int kthrotld_schedule_delayed_work(struct throtl_data *td,
> +		struct delayed_work *dwork, unsigned long delay)
> +{
> +	return queue_delayed_work(kthrotld_workqueue, dwork, delay);
> +}
> +

I don't think wrapping is necessary.  Defining and using a workqueue
directly should be enough.

> @@ -1113,6 +1120,11 @@ void blk_throtl_exit(struct request_queu
> 
>  static int __init throtl_init(void)
>  {
> +	kthrotld_workqueue = alloc_workqueue("kthrotld",
> +					WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);

And I don't think kthrotld needs to be HIGHPRI.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 22+ messages in thread
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved])
  2011-02-25 16:09 ` Tejun Heo
@ 2011-02-25 16:19 ` Vivek Goyal
  0 siblings, 0 replies; 22+ messages in thread
From: Vivek Goyal @ 2011-02-25 16:19 UTC (permalink / raw)
To: Tejun Heo; +Cc: Dominik Klein, linux kernel mailing list, libvir-list

On Fri, Feb 25, 2011 at 05:09:03PM +0100, Tejun Heo wrote:
> Hello,
> 
> On Fri, Feb 25, 2011 at 11:03:53AM -0500, Vivek Goyal wrote:
> > +int kthrotld_schedule_delayed_work(struct throtl_data *td,
> > +		struct delayed_work *dwork, unsigned long delay)
> > +{
> > +	return queue_delayed_work(kthrotld_workqueue, dwork, delay);
> > +}
> > +
> 
> I don't think wrapping is necessary.  Defining and using a workqueue
> directly should be enough.
> 
> > @@ -1113,6 +1120,11 @@ void blk_throtl_exit(struct request_queu
> > 
> >  static int __init throtl_init(void)
> >  {
> > +	kthrotld_workqueue = alloc_workqueue("kthrotld",
> > +					WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
> 
> And I don't think kthrotld needs to be HIGHPRI.

Ok, regenerating the patch with the above changes. I had to regenerate it
anyway, as I had generated this patch on top of some of my local commits
which are not in Linus' tree yet.

Thanks
Vivek

^ permalink raw reply	[flat|nested] 22+ messages in thread
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved])
  2011-02-25 16:09 ` Tejun Heo
  2011-02-25 16:19 ` Vivek Goyal
@ 2011-02-25 16:30 ` Vivek Goyal
  2011-02-25 16:56 ` Dominik Klein
  1 sibling, 1 reply; 22+ messages in thread
From: Vivek Goyal @ 2011-02-25 16:30 UTC (permalink / raw)
To: Tejun Heo; +Cc: Dominik Klein, linux kernel mailing list, libvir-list

On Fri, Feb 25, 2011 at 05:09:03PM +0100, Tejun Heo wrote:
> Hello,
> 
> On Fri, Feb 25, 2011 at 11:03:53AM -0500, Vivek Goyal wrote:
> > +int kthrotld_schedule_delayed_work(struct throtl_data *td,
> > +		struct delayed_work *dwork, unsigned long delay)
> > +{
> > +	return queue_delayed_work(kthrotld_workqueue, dwork, delay);
> > +}
> > +
> 
> I don't think wrapping is necessary.  Defining and using a workqueue
> directly should be enough.
> 
> > @@ -1113,6 +1120,11 @@ void blk_throtl_exit(struct request_queu
> > 
> >  static int __init throtl_init(void)
> >  {
> > +	kthrotld_workqueue = alloc_workqueue("kthrotld",
> > +					WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
> 
> And I don't think kthrotld needs to be HIGHPRI.
> 
> Thanks.

Here is the new patch.

Dominik, can you please try it and see if it fixes the issue.

Thanks
Vivek

o Use a separate workqueue for throttle related work and don't reuse the
  kblockd workqueue, as there is a cyclic dependency between cfq unplug
  work and throttle dispatch work.
Yet-to-be-signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 block/blk-throttle.c   |   27 ++++++++++++++++-----------
 include/linux/blkdev.h |    2 --
 2 files changed, 16 insertions(+), 13 deletions(-)

Index: linux-2.6/block/blk-throttle.c
===================================================================
--- linux-2.6.orig/block/blk-throttle.c	2011-02-25 11:11:49.000000000 -0500
+++ linux-2.6/block/blk-throttle.c	2011-02-25 11:21:51.709326299 -0500
@@ -20,6 +20,10 @@ static int throtl_quantum = 32;
 /* Throttling is performed over 100ms slice and after that slice is renewed */
 static unsigned long throtl_slice = HZ/10;	/* 100 ms */
 
+/* A workqueue to queue throttle related work */
+static struct workqueue_struct *kthrotld_workqueue;
+void throtl_schedule_delayed_work(struct throtl_data *td, unsigned long delay);
+
 struct throtl_rb_root {
 	struct rb_root rb;
 	struct rb_node *left;
@@ -345,10 +349,9 @@ static void throtl_schedule_next_dispatc
 	update_min_dispatch_time(st);
 
 	if (time_before_eq(st->min_disptime, jiffies))
-		throtl_schedule_delayed_work(td->queue, 0);
+		throtl_schedule_delayed_work(td, 0);
 	else
-		throtl_schedule_delayed_work(td->queue,
-				(st->min_disptime - jiffies));
+		throtl_schedule_delayed_work(td, (st->min_disptime - jiffies));
 }
 
 static inline void
@@ -815,10 +818,9 @@ void blk_throtl_work(struct work_struct
 }
 
 /* Call with queue lock held */
-void throtl_schedule_delayed_work(struct request_queue *q, unsigned long delay)
+void throtl_schedule_delayed_work(struct throtl_data *td, unsigned long delay)
 {
-	struct throtl_data *td = q->td;
 	struct delayed_work *dwork = &td->throtl_work;
 
 	if (total_nr_queued(td) > 0) {
@@ -827,12 +829,11 @@ void throtl_schedule_delayed_work(struct
 		 * Cancel that and schedule a new one.
 		 */
 		__cancel_delayed_work(dwork);
-		kblockd_schedule_delayed_work(q, dwork, delay);
+		queue_delayed_work(kthrotld_workqueue, dwork, delay);
 		throtl_log(td, "schedule work. delay=%lu jiffies=%lu",
 				delay, jiffies);
 	}
 }
-EXPORT_SYMBOL(throtl_schedule_delayed_work);
 
 static void
 throtl_destroy_tg(struct throtl_data *td, struct throtl_grp *tg)
@@ -920,7 +921,7 @@ static void throtl_update_blkio_group_re
 	smp_mb__after_atomic_inc();
 
 	/* Schedule a work now to process the limit change */
-	throtl_schedule_delayed_work(td->queue, 0);
+	throtl_schedule_delayed_work(td, 0);
 }
 
 static void throtl_update_blkio_group_write_bps(void *key,
@@ -934,7 +935,7 @@ static void throtl_update_blkio_group_wr
 	smp_mb__before_atomic_inc();
 	atomic_inc(&td->limits_changed);
 	smp_mb__after_atomic_inc();
-	throtl_schedule_delayed_work(td->queue, 0);
+	throtl_schedule_delayed_work(td, 0);
 }
 
 static void throtl_update_blkio_group_read_iops(void *key,
@@ -948,7 +949,7 @@ static void throtl_update_blkio_group_re
 	smp_mb__before_atomic_inc();
 	atomic_inc(&td->limits_changed);
 	smp_mb__after_atomic_inc();
-	throtl_schedule_delayed_work(td->queue, 0);
+	throtl_schedule_delayed_work(td, 0);
 }
 
 static void throtl_update_blkio_group_write_iops(void *key,
@@ -962,7 +963,7 @@ static void throtl_update_blkio_group_wr
 	smp_mb__before_atomic_inc();
 	atomic_inc(&td->limits_changed);
 	smp_mb__after_atomic_inc();
-	throtl_schedule_delayed_work(td->queue, 0);
+	throtl_schedule_delayed_work(td, 0);
 }
 
 void throtl_shutdown_timer_wq(struct request_queue *q)
@@ -1135,6 +1136,10 @@ void blk_throtl_exit(struct request_queu
 
 static int __init throtl_init(void)
 {
+	kthrotld_workqueue = alloc_workqueue("kthrotld", WQ_MEM_RECLAIM, 0);
+	if (!kthrotld_workqueue)
+		panic("Failed to create kthrotld\n");
+
 	blkio_policy_register(&blkio_policy_throtl);
 	return 0;
 }

Index: linux-2.6/include/linux/blkdev.h
===================================================================
--- linux-2.6.orig/include/linux/blkdev.h	2011-02-25 11:11:49.000000000 -0500
+++ linux-2.6/include/linux/blkdev.h	2011-02-25 11:13:03.670455489 -0500
@@ -1136,7 +1136,6 @@ static inline uint64_t rq_io_start_time_
 extern int blk_throtl_init(struct request_queue *q);
 extern void blk_throtl_exit(struct request_queue *q);
 extern int blk_throtl_bio(struct request_queue *q, struct bio **bio);
-extern void throtl_schedule_delayed_work(struct request_queue *q, unsigned long delay);
 extern void throtl_shutdown_timer_wq(struct request_queue *q);
 #else /* CONFIG_BLK_DEV_THROTTLING */
 static inline int blk_throtl_bio(struct request_queue *q, struct bio **bio)
@@ -1146,7 +1145,6 @@ static inline int blk_throtl_bio(struct
 static inline int blk_throtl_init(struct request_queue *q) { return 0; }
 static inline int blk_throtl_exit(struct request_queue *q) { return 0; }
-static inline void throtl_schedule_delayed_work(struct request_queue *q, unsigned long delay) {}
 static inline void throtl_shutdown_timer_wq(struct request_queue *q) {}
 #endif	/* CONFIG_BLK_DEV_THROTTLING */

^ permalink raw reply	[flat|nested] 22+ messages in thread
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved]) 2011-02-25 16:30 ` Vivek Goyal @ 2011-02-25 16:56 ` Dominik Klein 0 siblings, 0 replies; 22+ messages in thread From: Dominik Klein @ 2011-02-25 16:56 UTC (permalink / raw) To: Vivek Goyal; +Cc: Tejun Heo, linux kernel mailing list, libvir-list > Here is the new patch. > > Dominik, can you please try it and see if fixes the issue. It does. Have a nice weekend, Dominik ^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved]) 2011-02-25 7:24 ` Dominik Klein 2011-02-25 11:29 ` Tejun Heo @ 2011-02-25 19:53 ` Steven Rostedt 2011-02-25 20:18 ` Vivek Goyal 1 sibling, 1 reply; 22+ messages in thread From: Steven Rostedt @ 2011-02-25 19:53 UTC (permalink / raw) To: Dominik Klein Cc: Tejun Heo, Vivek Goyal, linux kernel mailing list, libvir-list On Fri, Feb 25, 2011 at 08:24:15AM +0100, Dominik Klein wrote: > > Maybe watching what the workqueue is doing using the following trace > > events could be helpful. > > > > # grep workqueue /sys/kernel/debug/tracing/available_events > > workqueue:workqueue_insertion > > workqueue:workqueue_execution > > workqueue:workqueue_creation > > workqueue:workqueue_destruction > > Since I've never done this before, I will tell you how I captured the > trace so you know what I did and can see whether I did something wrong > and can correct me if necessary. > > echo blk > /sys/kernel/debug/tracing/current_tracer > echo 1 > /sys/block/sdb/trace/enable > echo workqueue_queue_work >> /sys/kernel/debug/tracing/set_event > echo workqueue_activate_work >> /sys/kernel/debug/tracing/set_event > echo workqueue_execute_start >> /sys/kernel/debug/tracing/set_event > echo workqueue_execute_end >> /sys/kernel/debug/tracing/set_event > cat /sys/kernel/debug/tracing/trace_pipe Just an FYI, If you download trace-cmd from: git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/trace-cmd.git You could do the following: # trace-cmd record -p blk -e workqueue_queue_work -e workqueue_activate_work -e workqueue_execute_start -e workqueue_execute_end <cmd> Where <cmd> could be sh -c "echo 1 > /sys/block/sdb/trace/enable; sleep 10" This will create a trace.dat file that you can read with trace-cmd report Another way, if you want to do more than just that echo is to use: # trace-cmd start -p blk -e workqueue_queue_work -e workqueue_activate_work -e workqueue_execute_start -e workqueue_execute_end # 
echo 1 > /sys/block/sdb/trace/enable
# <do anything you want>
# trace-cmd stop
# trace-cmd extract

The start just enables ftrace (like you did with the echos).
The extract will create the trace.dat file from the ftrace ring buffers.
You could also just cat trace_pipe, which would do the same, or you
could do the echos and then the trace-cmd stop and extract to get the
file. It's your choice ;)

You can then gzip the trace.dat file and send it to others that can read
it as well, as all the information needed to read the trace is recorded
in the file.

You could even send the data over the network instead of writing the
trace.dat locally.

On another box:

  $ trace-cmd listen -p 12345

Then on the target:

  # trace-cmd record -N host:12345 ...

This will send the data to the listening host and the file will be
created on the host side.

One added benefit of having the trace.dat file: kernelshark can read it.

-- Steve

> 
> And that output i gzip'd.
> 
> This was taken with 2.6.37 plus the patch you mentioned on a Dell R815
> with 2 12 core AMD Opteron 6174 CPUs. If you need any more information,
> please let me know.
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved]) 2011-02-25 19:53 ` Steven Rostedt @ 2011-02-25 20:18 ` Vivek Goyal 2011-02-26 2:47 ` Steven Rostedt 0 siblings, 1 reply; 22+ messages in thread From: Vivek Goyal @ 2011-02-25 20:18 UTC (permalink / raw) To: Steven Rostedt Cc: Dominik Klein, Tejun Heo, linux kernel mailing list, libvir-list On Fri, Feb 25, 2011 at 02:53:08PM -0500, Steven Rostedt wrote: > On Fri, Feb 25, 2011 at 08:24:15AM +0100, Dominik Klein wrote: > > > Maybe watching what the workqueue is doing using the following trace > > > events could be helpful. > > > > > > # grep workqueue /sys/kernel/debug/tracing/available_events > > > workqueue:workqueue_insertion > > > workqueue:workqueue_execution > > > workqueue:workqueue_creation > > > workqueue:workqueue_destruction > > > > Since I've never done this before, I will tell you how I captured the > > trace so you know what I did and can see whether I did something wrong > > and can correct me if necessary. 
> > > > echo blk > /sys/kernel/debug/tracing/current_tracer > > echo 1 > /sys/block/sdb/trace/enable > > echo workqueue_queue_work >> /sys/kernel/debug/tracing/set_event > > echo workqueue_activate_work >> /sys/kernel/debug/tracing/set_event > > echo workqueue_execute_start >> /sys/kernel/debug/tracing/set_event > > echo workqueue_execute_end >> /sys/kernel/debug/tracing/set_event > > cat /sys/kernel/debug/tracing/trace_pipe > > Just an FYI, > > If you download trace-cmd from: > > git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/trace-cmd.git > > You could do the following: > > # trace-cmd record -p blk -e workqueue_queue_work -e workqueue_activate_work -e workqueue_execute_start -e workqueue_execute_end <cmd> > > Where <cmd> could be > sh -c "echo 1 > /sys/block/sdb/trace/enable; sleep 10" > > This will create a trace.dat file that you can read with > trace-cmd report > > Another way, if you want to do more than just that echo is to use: > > # trace-cmd start -p blk -e workqueue_queue_work -e workqueue_activate_work -e workqueue_execute_start -e workqueue_execute_end > # echo 1 > /sys/block/sdb/trace/enable > # <do anyting you want> > # trace-cmd stop > # trace-cmd extract > > The start just enable ftrace (like you did with the echos). > The extract will create the trace.dat file from the ftrace ring buffers. > You could alse just cat trace_pipe, which would do the same, or you > could do the echos, and then the trace-cmd stop and extract to get the > file. It's your choice ;) > > You can then gzip the trace.dat file and send it to others that can read > it as well, as all the information needed to read the trace is recorded > in the file. > > You could even send the data over the network instead of writing the > trace.dat locally. > > On another box: > > $ trace-cmd listen -p 12345 > > Then on the target: > > # trace-cmd record -N host:12345 ... > > This will send the data to the listening host and the file will be > created on the host side. Thanks Steve. 
In this case, this feature of sending trace data over the network would
have helped. We were running into issues where the IO scheduler was
freezing, so we could not read anything from disk (including saved
traces). Hence we were directing everything to the console and then doing
copy-paste.

So sending it over the network would probably have worked even in this
case.

Will give trace-cmd a try next time.

Thanks
Vivek

^ permalink raw reply	[flat|nested] 22+ messages in thread
* Re: Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved])
  2011-02-25 20:18 ` Vivek Goyal
@ 2011-02-26  2:47 ` Steven Rostedt
  0 siblings, 0 replies; 22+ messages in thread
From: Steven Rostedt @ 2011-02-26 2:47 UTC (permalink / raw)
To: Vivek Goyal; +Cc: Dominik Klein, Tejun Heo, linux kernel mailing list, libvir-list

On Fri, 2011-02-25 at 15:18 -0500, Vivek Goyal wrote:
> Thanks Steve. In this case this feature of sending trace data over
> network would have helped. We were running into issues where IO scheduler
> was freezing so we could not read anything from disk(including saved traces).
> Hence we were directing everything to console and then doing copy paste.
> 
> So sending it over network would have probably worked even in this case.
> 
> Will give trace-cmd a try next time.

Hi Vivek,

I just pushed out a hack that lets trace-cmd read the blk tracer's
output. I found that blktrace never exported its structure to
/debug/tracing/events/ftrace/blktrace/format, so userspace has no real
way to know how to parse it. Instead, I wrote a hack that creates this
file semi-dynamically, based on the information of other events. I also
copied a lot of the blktrace code from the kernel so that it can print
out the same format.

You need to install the plugin that is built with trace-cmd. It will be
installed automatically if you do a make install; if you do not have
root access, just cp plugin_blk.so into the ~/.trace-cmd/plugins
directory (you may need to create that directory yourself).

Then when you run trace-cmd report on a file made with the blk tracer,
it will give you a nice output. If you already have a trace.dat file
from a previous extract, you don't need to run the trace again.
trace-cmd report will work on that file now.
I'll be heading out to NYC on Monday for the End Users Conf, and this
weekend I need to get all my chores done around the house for the missus
to let me go ;)  Thus, I won't be doing much more till I get back at the
end of next week.

-- Steve

^ permalink raw reply	[flat|nested] 22+ messages in thread
end of thread, other threads:[~2011-02-26 2:47 UTC | newest]
Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <20110218163137.GF26654@redhat.com>
[not found] ` <4D621B9B.3070205@in-telegence.net>
[not found] ` <4D622002.2040604@in-telegence.net>
[not found] ` <4D6222A8.6090303@in-telegence.net>
[not found] ` <20110221184442.GM6428@redhat.com>
[not found] ` <4D63BA3B.7070809@in-telegence.net>
[not found] ` <20110222152426.GD28269@redhat.com>
[not found] ` <20110222190953.GF28269@redhat.com>
[not found] ` <4D650D7E.4050908@in-telegence.net>
[not found] ` <4D662248.6040405@in-telegence.net>
2011-02-24 14:23 ` Is it a workqueue related issue in 2.6.37 (Was: Re: [libvirt] blkio cgroup [solved]) Vivek Goyal
2011-02-24 14:31 ` Tejun Heo
2011-02-24 14:58 ` Dominik Klein
2011-02-24 15:17 ` Tejun Heo
2011-02-25 7:24 ` Dominik Klein
2011-02-25 11:29 ` Tejun Heo
2011-02-25 11:46 ` Dominik Klein
2011-02-25 13:18 ` Tejun Heo
2011-02-25 14:41 ` Dominik Klein
2011-02-25 14:55 ` Tejun Heo
2011-02-25 14:57 ` Vivek Goyal
2011-02-25 15:03 ` Tejun Heo
2011-02-25 15:11 ` Vivek Goyal
2011-02-25 15:15 ` Vivek Goyal
2011-02-25 16:03 ` Vivek Goyal
2011-02-25 16:09 ` Tejun Heo
2011-02-25 16:19 ` Vivek Goyal
2011-02-25 16:30 ` Vivek Goyal
2011-02-25 16:56 ` Dominik Klein
2011-02-25 19:53 ` Steven Rostedt
2011-02-25 20:18 ` Vivek Goyal
2011-02-26 2:47 ` Steven Rostedt
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox