* Use of spinlock after free with CFQ scheduler
@ 2006-06-13 15:37 Thomas Petazzoni
From: Thomas Petazzoni @ 2006-06-13 15:37 UTC (permalink / raw)
To: axboe; +Cc: linux-kernel
Hi,
While developing a block device driver, we stumbled upon the kernel
panic reported at
http://www.ussg.iu.edu/hypermail/linux/kernel/0512.3/0297.html.
According to that mail and your answer, it seems that the CFQ scheduler
uses the queue lock after blk_cleanup_queue(), at which point the
spinlock may already have been freed. I can confirm that the bug does
not appear with other I/O schedulers.
However, the proposed fix for "ub" looks quite strange to me. It uses
a static array of spinlocks, so that they remain in memory after
blk_cleanup_queue(). However, "ub" can be compiled as a module, so I
don't see what prevents the CFQ scheduler from using those spinlocks
once the module has been unloaded. I do not understand how the
provided patch correctly fixes the bug.
The bug was reported on a pre-2.6.15 kernel, but we're still seeing
this bug with a 2.6.16 FedoraCore-hacked kernel.
To me, the bug seems to be in the CFQ scheduler itself, doesn't it?
Maybe we should use the internal queue lock (by passing NULL as the
lock parameter to the blk_init_queue() call), and then modify the CFQ
scheduler so that it correctly increments/decrements queue->refcnt?
What do you think?
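Concretely, what I have in mind is something like the sketch below
(against the 2.6 block API; my_request_fn is a placeholder, and this
assumes blk_init_queue() would be taught to fall back to a spinlock
embedded in the queue itself when passed NULL):

```c
/* Sketch of the proposal: let the queue own its lock, so the lock's
 * lifetime is tied to queue->refcnt rather than to the driver module.
 * my_request_fn is a placeholder; error handling is omitted. */
static struct request_queue *my_queue;

static int my_driver_init(void)
{
        /* NULL lock: the block layer would use a spinlock embedded in
         * the queue, freed only when the last reference (possibly
         * held by CFQ) is dropped. */
        my_queue = blk_init_queue(my_request_fn, NULL);
        if (!my_queue)
                return -ENOMEM;
        return 0;
}
```

That way a driver could never hand the block layer a lock with a
shorter lifetime than the queue.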
Thanks!
Thomas
--
Thomas Petazzoni - thomas.petazzoni@enix.org
http://{thomas,sos,kos}.enix.org - http://www.toulibre.org
http://www.{livret,agenda}dulibre.org
* Re: Use of spinlock after free with CFQ scheduler
From: Jens Axboe @ 2006-06-14 5:19 UTC (permalink / raw)
To: Thomas Petazzoni; +Cc: linux-kernel
On Tue, Jun 13 2006, Thomas Petazzoni wrote:
> Hi,
>
> While developing a block device driver, we stumbled upon the kernel
> panic reported at
> http://www.ussg.iu.edu/hypermail/linux/kernel/0512.3/0297.html.
> According to that mail and your answer, it seems that the CFQ scheduler
> uses the queue lock after blk_cleanup_queue(), at which point the
> spinlock may already have been freed. I can confirm that the bug does
> not appear with other I/O schedulers.
>
> However, the proposed fix for "ub" looks quite strange to me. It uses
> a static array of spinlocks, so that they remain in memory after
> blk_cleanup_queue(). However, "ub" can be compiled as a module, so I
> don't see what prevents the CFQ scheduler from using those spinlocks
> once the module has been unloaded. I do not understand how the
> provided patch correctly fixes the bug.
You must not be able to remove the module while CFQ still holds
references to the locks. The problem observed initially with ub is
that it doesn't honor queue reference counting - it embeds the queue
lock inside a structure that it frees, e.g. on device removal. The
lock associated with the queue obviously needs to follow the same life
cycle as the queue.
> The bug was reported on a pre-2.6.15 kernel, but we're still seeing
> this bug with a 2.6.16 FedoraCore-hacked kernel.
>
> To me, the bug seems to be in the CFQ scheduler itself, doesn't it?
> Maybe we should use the internal queue lock (by passing NULL as the
> lock parameter to the blk_init_queue() call), and then modify the CFQ
> scheduler so that it correctly increments/decrements queue->refcnt?
Where do you see a bug in the CFQ inc/dec of the queue reference count?
You can give 2.6.17-rcX (X == latest) a spin and see if that changes
anything for you.
--
Jens Axboe