* [PATCH] Optimize lock in queue unplugging
@ 2008-04-29 19:12 Mikulas Patocka
2008-04-29 19:25 ` Jens Axboe
0 siblings, 1 reply; 11+ messages in thread
From: Mikulas Patocka @ 2008-04-29 19:12 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-kernel, Mike Anderson, Alasdair Graeme Kergon
Hi
Mike Anderson was doing an OLTP benchmark on a computer with 48 physical
disks mapped to one logical device via device mapper.
He found that there was a slowdown on request_queue->lock in function
generic_unplug_device. The slowdown is caused by the fact that when some
code calls unplug on the device mapper, device mapper calls unplug on all
physical disks. These unplug calls take the lock, find that the queue is
already unplugged, release the lock and exit.
With the below patch, performance of the benchmark was increased by 18%
(the whole OLTP application, not just block layer microbenchmarks).
So I'm submitting this patch for upstream. I think the patch is correct,
because when multiple threads plug and unplug simultaneously, it is
unspecified whether the queue ends up plugged (so the patch can't make
this worse). And the caller that plugged the queue should unplug it
anyway (if it doesn't, there's a 3 ms timeout).
Mikulas
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
---
block/blk-core.c | 3 +++
1 file changed, 3 insertions(+)
Index: linux-2.6.25/block/blk-core.c
===================================================================
--- linux-2.6.25.orig/block/blk-core.c 2008-04-17 04:49:44.000000000 +0200
+++ linux-2.6.25/block/blk-core.c 2008-04-29 18:50:37.000000000 +0200
@@ -271,6 +271,9 @@ EXPORT_SYMBOL(__generic_unplug_device);
**/
void generic_unplug_device(struct request_queue *q)
{
+ if (likely(!blk_queue_plugged(q)))
+ return;
+
spin_lock_irq(q->queue_lock);
__generic_unplug_device(q);
spin_unlock_irq(q->queue_lock);
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH] Optimize lock in queue unplugging
From: Jens Axboe @ 2008-04-29 19:25 UTC (permalink / raw)
To: Mikulas Patocka; +Cc: linux-kernel, Mike Anderson, Alasdair Graeme Kergon
On Tue, Apr 29 2008, Mikulas Patocka wrote:
> Hi
>
> Mike Anderson was doing an OLTP benchmark on a computer with 48 physical
> disks mapped to one logical device via device mapper.
>
> He found that there was a slowdown on request_queue->lock in function
> generic_unplug_device. The slowdown is caused by the fact that when some
> code calls unplug on the device mapper, device mapper calls unplug on all
> physical disks. These unplug calls take the lock, find that the queue is
> already unplugged, release the lock and exit.
>
> With the below patch, performance of the benchmark was increased by 18%
> (the whole OLTP application, not just block layer microbenchmarks).
>
> So I'm submitting this patch for upstream. I think the patch is correct,
> because when more threads call simultaneously plug and unplug, it is
> unspecified, if the queue is or isn't plugged (so the patch can't make
> this worse). And the caller that plugged the queue should unplug it
> anyway. (if it doesn't, there's 3ms timeout).
Where were these unplug calls coming from? The block layer will
generally only unplug a queue when it is actually plugged, so if you
are seeing so many unplug calls that the patch reduces overhead by as
much as described, perhaps the call site is buggy?
--
Jens Axboe
* Re: [PATCH] Optimize lock in queue unplugging
From: Mikulas Patocka @ 2008-04-29 20:02 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-kernel, Mike Anderson, Alasdair Graeme Kergon
On Tue, 29 Apr 2008, Jens Axboe wrote:
> On Tue, Apr 29 2008, Mikulas Patocka wrote:
>> Hi
>>
>> Mike Anderson was doing an OLTP benchmark on a computer with 48 physical
>> disks mapped to one logical device via device mapper.
>>
>> He found that there was a slowdown on request_queue->lock in function
>> generic_unplug_device. The slowdown is caused by the fact that when some
>> code calls unplug on the device mapper, device mapper calls unplug on all
>> physical disks. These unplug calls take the lock, find that the queue is
>> already unplugged, release the lock and exit.
>>
>> With the below patch, performance of the benchmark was increased by 18%
>> (the whole OLTP application, not just block layer microbenchmarks).
>>
>> So I'm submitting this patch for upstream. I think the patch is correct,
>> because when more threads call simultaneously plug and unplug, it is
>> unspecified, if the queue is or isn't plugged (so the patch can't make
>> this worse). And the caller that plugged the queue should unplug it
>> anyway. (if it doesn't, there's 3ms timeout).
>
> Where were these unplug calls coming from? The block layer will
> generally only unplug a queue when it is actually plugged, so if you
> are seeing so many unplug calls that the patch reduces overhead by as
> much as described, perhaps the call site is buggy?
>
> --
> Jens Axboe
unplug is called on every wait_on_buffer (and similar calls):
__wait_on_buffer -> sync_buffer -> blk_run_address_space ->
blk_run_backing_dev -> bdi->unplug_io_fn(bdi, page);
(I'm not sure this was IBM's case, I'm just guessing - this is
the most obvious example where unplug is called repeatedly.)
There is no test there for whether the queue is plugged, and there
shouldn't be. If you have this situation
dm-linear(unplugged) -> physical-disk(plugged)
then unplug should be called on dm-linear (that will call the dm unplug
method dm_unplug_all, and that will unplug the disk). If you add a test
for a plugged queue to the upper layer, you break this stacked-driver
situation completely.
The test for an already plugged queue should be in the lowest-level
physical device driver, not in the upper layers.
Mikulas
* Re: [PATCH] Optimize lock in queue unplugging
From: Jens Axboe @ 2008-04-29 20:05 UTC (permalink / raw)
To: Mikulas Patocka; +Cc: linux-kernel, Mike Anderson, Alasdair Graeme Kergon
On Tue, Apr 29 2008, Mikulas Patocka wrote:
>
>
> On Tue, 29 Apr 2008, Jens Axboe wrote:
>
> >On Tue, Apr 29 2008, Mikulas Patocka wrote:
> >>Hi
> >>
> >>Mike Anderson was doing an OLTP benchmark on a computer with 48 physical
> >>disks mapped to one logical device via device mapper.
> >>
> >>He found that there was a slowdown on request_queue->lock in function
> >>generic_unplug_device. The slowdown is caused by the fact that when some
> >>code calls unplug on the device mapper, device mapper calls unplug on all
> >>physical disks. These unplug calls take the lock, find that the queue is
> >>already unplugged, release the lock and exit.
> >>
> >>With the below patch, performance of the benchmark was increased by 18%
> >>(the whole OLTP application, not just block layer microbenchmarks).
> >>
> >>So I'm submitting this patch for upstream. I think the patch is correct,
> >>because when more threads call simultaneously plug and unplug, it is
> >>unspecified, if the queue is or isn't plugged (so the patch can't make
> >>this worse). And the caller that plugged the queue should unplug it
> >>anyway. (if it doesn't, there's 3ms timeout).
> >
> >Where were these unplug calls coming from? The block layer will
> >generally only unplug a queue when it is actually plugged, so if you
> >are seeing so many unplug calls that the patch reduces overhead by as
> >much as described, perhaps the call site is buggy?
> >
> >--
> >Jens Axboe
>
> unplug is called on any wait_on_buffer (and similar calls)
> __wait_on_buffer -> sync_buffer -> blk_run_address_space ->
> blk_run_backing_dev -> bdi->unplug_io_fn(bdi, page);
>
> (I'm not sure that this was the IBM's case, I'm just guessing - this is
> the most obvious example where unplug is called repeatedly)
>
>
> There is no test there for whether the queue is plugged, and there
> shouldn't be. If you have this situation
>
> dm-linear(unplugged) -> physical-disk(plugged)
>
> then unplug should be called on dm-linear (that will call the dm unplug
> method dm_unplug_all, and that will unplug the disk). If you add a test
> for a plugged queue to the upper layer, you break this stacked-driver
> situation completely.
>
> The test for an already plugged queue should be in the lowest-level
> physical device driver, not in the upper layers.
Fair enough, I'll put the patch under closer scrutiny and queue it up.
Thanks!
--
Jens Axboe
* Re: [PATCH] Optimize lock in queue unplugging
From: Mike Anderson @ 2008-04-29 20:29 UTC (permalink / raw)
To: Jens Axboe; +Cc: Mikulas Patocka, linux-kernel, Alasdair Graeme Kergon
Jens Axboe <jens.axboe@oracle.com> wrote:
> On Tue, Apr 29 2008, Mikulas Patocka wrote:
> > Hi
> >
> > Mike Anderson was doing an OLTP benchmark on a computer with 48 physical
> > disks mapped to one logical device via device mapper.
> >
> > He found that there was a slowdown on request_queue->lock in function
> > generic_unplug_device. The slowdown is caused by the fact that when some
> > code calls unplug on the device mapper, device mapper calls unplug on all
> > physical disks. These unplug calls take the lock, find that the queue is
> > already unplugged, release the lock and exit.
> >
> > With the below patch, performance of the benchmark was increased by 18%
> > (the whole OLTP application, not just block layer microbenchmarks).
> >
> > So I'm submitting this patch for upstream. I think the patch is correct,
> > because when more threads call simultaneously plug and unplug, it is
> > unspecified, if the queue is or isn't plugged (so the patch can't make
> > this worse). And the caller that plugged the queue should unplug it
> > anyway. (if it doesn't, there's 3ms timeout).
>
> > Where were these unplug calls coming from? The block layer will
> > generally only unplug a queue when it is actually plugged, so if you
> > are seeing so many unplug calls that the patch reduces overhead by as
> > much as described, perhaps the call site is buggy?
I do not have direct access to the benchmark setup, but here is the data
I have received.
The oprofile data was showing ll_rw_blk::generic_unplug_device() as a top
routine at 13% of the samples. Annotation of the samples shows hits on
spin_lock_irq(q->queue_lock).
Here are some sample call traces:
Call trace #1
kernel: [<ffffffff80058c6c>] generic_unplug_device+0x5d/0xc6
kernel: [<ffffffff8820ea3e>] :dm_mod:dm_table_unplug_all+0x33/0x41
kernel: [<ffffffff8820cc85>] :dm_mod:dm_unplug_all+0x1d/0x28
kernel: [<ffffffff8005a78a>] blk_backing_dev_unplug+0x56/0x5b
kernel: [<ffffffff80014cdc>] sync_buffer+0x36/0x3f
kernel: [<ffffffff800629a4>] __wait_on_bit+0x40/0x6f
kernel: [<ffffffff80014ca6>] sync_buffer+0x0/0x3f
kernel: [<ffffffff80062a3f>] out_of_line_wait_on_bit+0x6c/0x78
kernel: [<ffffffff8009c474>] wake_bit_function+0x0/0x23
kernel: [<ffffffff88034c85>] :jbd:journal_commit_transaction+0x91f/0x1086
kernel: [<ffffffff8003d038>] lock_timer_base+0x1b/0x3c
kernel: [<ffffffff8803840e>] :jbd:kjournald+0xc1/0x213
kernel: [<ffffffff8009c446>] autoremove_wake_function+0x0/0x2e
kernel: [<ffffffff8009c283>] keventd_create_kthread+0x0/0x61
kernel: [<ffffffff8803834d>] :jbd:kjournald+0x0/0x213
kernel: [<ffffffff8009c283>] keventd_create_kthread+0x0/0x61
kernel: [<ffffffff800321d5>] kthread+0xfe/0x132
kernel: [<ffffffff8005cfb1>] child_rip+0xa/0x11
kernel: [<ffffffff8009c283>] keventd_create_kthread+0x0/0x61
kernel: [<ffffffff800320d7>] kthread+0x0/0x132
kernel: [<ffffffff8005cfa7>] child_rip+0x0/0x11
Call trace #2
kernel: [<ffffffff80058c6c>] generic_unplug_device+0x5d/0xc6
kernel: [<ffffffff8820ea3e>] :dm_mod:dm_table_unplug_all+0x33/0x41
kernel: [<ffffffff8820cc85>] :dm_mod:dm_unplug_all+0x1d/0x28
kernel: [<ffffffff8005a78a>] blk_backing_dev_unplug+0x56/0x5b
kernel: [<ffffffff800e8bfe>] __blockdev_direct_IO+0x889/0xaa2
kernel: [<ffffffff88050800>] :ext3:ext3_direct_IO+0xf3/0x18b
kernel: [<ffffffff8804ec84>] :ext3:ext3_get_block+0x0/0xe3
kernel: [<ffffffff800be6bb>] generic_file_direct_IO+0xbd/0xfb
kernel: [<ffffffff8001e637>] generic_file_direct_write+0x60/0xf2
kernel: [<ffffffff80015cfd>] __generic_file_aio_write_nolock+0x2b7/0x3b8
kernel: [<ffffffff8002134f>] generic_file_aio_write+0x65/0xc1
kernel: [<ffffffff8804c192>] :ext3:ext3_file_write+0x16/0x91
kernel: [<ffffffff80017944>] do_sync_write+0xc7/0x104
kernel: [<ffffffff8009c446>] autoremove_wake_function+0x0/0x2e
kernel: [<ffffffff80111400>] free_msg+0x22/0x3c
kernel: [<ffffffff800161c4>] vfs_write+0xce/0x174
kernel: [<ffffffff8004194c>] sys_pwrite64+0x50/0x70
kernel: [<ffffffff8005cde9>] error_exit+0x0/0x84
kernel: [<ffffffff8005c116>] system_call+0x7e/0x83
-andmike
--
Michael Anderson
andmike@linux.vnet.ibm.com
* Re: [PATCH] Optimize lock in queue unplugging
From: Jens Axboe @ 2008-04-30 7:14 UTC (permalink / raw)
To: Mike Anderson; +Cc: Mikulas Patocka, linux-kernel, Alasdair Graeme Kergon
On Tue, Apr 29 2008, Mike Anderson wrote:
> Jens Axboe <jens.axboe@oracle.com> wrote:
> > On Tue, Apr 29 2008, Mikulas Patocka wrote:
> > > Hi
> > >
> > > Mike Anderson was doing an OLTP benchmark on a computer with 48 physical
> > > disks mapped to one logical device via device mapper.
> > >
> > > He found that there was a slowdown on request_queue->lock in function
> > > generic_unplug_device. The slowdown is caused by the fact that when some
> > > code calls unplug on the device mapper, device mapper calls unplug on all
> > > physical disks. These unplug calls take the lock, find that the queue is
> > > already unplugged, release the lock and exit.
> > >
> > > With the below patch, performance of the benchmark was increased by 18%
> > > (the whole OLTP application, not just block layer microbenchmarks).
> > >
> > > So I'm submitting this patch for upstream. I think the patch is correct,
> > > because when more threads call simultaneously plug and unplug, it is
> > > unspecified, if the queue is or isn't plugged (so the patch can't make
> > > this worse). And the caller that plugged the queue should unplug it
> > > anyway. (if it doesn't, there's 3ms timeout).
> >
> > Where were these unplug calls coming from? The block layer will
> > generally only unplug a queue when it is actually plugged, so if you
> > are seeing so many unplug calls that the patch reduces overhead by as
> > much as described, perhaps the call site is buggy?
>
> I do not have direct access to the benchmark setup, but here is the data
> I have received.
>
> The oprofile data was showing ll_rw_blk::generic_unplug_device() as a top
> routine at 13% of the samples. Annotation of the samples shows hits on
> spin_lock_irq(q->queue_lock).
>
> Here are some sample call traces:
>
> Call trace #1
>
> kernel: [<ffffffff80058c6c>] generic_unplug_device+0x5d/0xc6
> kernel: [<ffffffff8820ea3e>] :dm_mod:dm_table_unplug_all+0x33/0x41
> kernel: [<ffffffff8820cc85>] :dm_mod:dm_unplug_all+0x1d/0x28
> kernel: [<ffffffff8005a78a>] blk_backing_dev_unplug+0x56/0x5b
> kernel: [<ffffffff80014cdc>] sync_buffer+0x36/0x3f
> kernel: [<ffffffff800629a4>] __wait_on_bit+0x40/0x6f
> kernel: [<ffffffff80014ca6>] sync_buffer+0x0/0x3f
> kernel: [<ffffffff80062a3f>] out_of_line_wait_on_bit+0x6c/0x78
> kernel: [<ffffffff8009c474>] wake_bit_function+0x0/0x23
> kernel: [<ffffffff88034c85>] :jbd:journal_commit_transaction+0x91f/0x1086
> kernel: [<ffffffff8003d038>] lock_timer_base+0x1b/0x3c
> kernel: [<ffffffff8803840e>] :jbd:kjournald+0xc1/0x213
> kernel: [<ffffffff8009c446>] autoremove_wake_function+0x0/0x2e
> kernel: [<ffffffff8009c283>] keventd_create_kthread+0x0/0x61
> kernel: [<ffffffff8803834d>] :jbd:kjournald+0x0/0x213
> kernel: [<ffffffff8009c283>] keventd_create_kthread+0x0/0x61
> kernel: [<ffffffff800321d5>] kthread+0xfe/0x132
> kernel: [<ffffffff8005cfb1>] child_rip+0xa/0x11
> kernel: [<ffffffff8009c283>] keventd_create_kthread+0x0/0x61
> kernel: [<ffffffff800320d7>] kthread+0x0/0x132
> kernel: [<ffffffff8005cfa7>] child_rip+0x0/0x11
>
> Call trace #2
> kernel: [<ffffffff80058c6c>] generic_unplug_device+0x5d/0xc6
> kernel: [<ffffffff8820ea3e>] :dm_mod:dm_table_unplug_all+0x33/0x41
> kernel: [<ffffffff8820cc85>] :dm_mod:dm_unplug_all+0x1d/0x28
> kernel: [<ffffffff8005a78a>] blk_backing_dev_unplug+0x56/0x5b
> kernel: [<ffffffff800e8bfe>] __blockdev_direct_IO+0x889/0xaa2
> kernel: [<ffffffff88050800>] :ext3:ext3_direct_IO+0xf3/0x18b
> kernel: [<ffffffff8804ec84>] :ext3:ext3_get_block+0x0/0xe3
> kernel: [<ffffffff800be6bb>] generic_file_direct_IO+0xbd/0xfb
> kernel: [<ffffffff8001e637>] generic_file_direct_write+0x60/0xf2
> kernel: [<ffffffff80015cfd>] __generic_file_aio_write_nolock+0x2b7/0x3b8
> kernel: [<ffffffff8002134f>] generic_file_aio_write+0x65/0xc1
> kernel: [<ffffffff8804c192>] :ext3:ext3_file_write+0x16/0x91
> kernel: [<ffffffff80017944>] do_sync_write+0xc7/0x104
> kernel: [<ffffffff8009c446>] autoremove_wake_function+0x0/0x2e
> kernel: [<ffffffff80111400>] free_msg+0x22/0x3c
> kernel: [<ffffffff800161c4>] vfs_write+0xce/0x174
> kernel: [<ffffffff8004194c>] sys_pwrite64+0x50/0x70
> kernel: [<ffffffff8005cde9>] error_exit+0x0/0x84
> kernel: [<ffffffff8005c116>] system_call+0x7e/0x83
So it's basically dm calling into blk_unplug() all the time, which
doesn't check if the queue is plugged. The reason why I didn't like the
initial patch is that ->unplug_fn() really should not be called unless
the queue IS plugged. So how about this instead:
http://git.kernel.dk/?p=linux-2.6-block.git;a=commit;h=c44993018887e82abd49023e92e8d8b6000e03ed
That's a lot more appropriate, imho.
--
Jens Axboe
* Re: [PATCH] Optimize lock in queue unplugging
From: Alasdair G Kergon @ 2008-04-30 10:38 UTC (permalink / raw)
To: Jens Axboe; +Cc: Mike Anderson, Mikulas Patocka, linux-kernel
On Wed, Apr 30, 2008 at 09:14:15AM +0200, Jens Axboe wrote:
> So it's basically dm calling into blk_unplug() all the time, which
> doesn't check if the queue is plugged. The reason why I didn't like the
> initial patch is that ->unplug_fn() really should not be called unless
> the queue IS plugged. So how about this instead:
>
> http://git.kernel.dk/?p=linux-2.6-block.git;a=commit;h=c44993018887e82abd49023e92e8d8b6000e03ed
>
> That's a lot more appropriate, imho.
Makes sense to me, ack.
Alasdair
--
agk@redhat.com
* Re: [PATCH] Optimize lock in queue unplugging
From: Mikulas Patocka @ 2008-04-30 13:54 UTC (permalink / raw)
To: Jens Axboe; +Cc: Mike Anderson, linux-kernel, Alasdair Graeme Kergon
On Wed, 30 Apr 2008, Jens Axboe wrote:
> On Tue, Apr 29 2008, Mike Anderson wrote:
>> Jens Axboe <jens.axboe@oracle.com> wrote:
>>> On Tue, Apr 29 2008, Mikulas Patocka wrote:
>>>> Hi
>>>>
>>>> Mike Anderson was doing an OLTP benchmark on a computer with 48 physical
>>>> disks mapped to one logical device via device mapper.
>>>>
>>>> He found that there was a slowdown on request_queue->lock in function
>>>> generic_unplug_device. The slowdown is caused by the fact that when some
>>>> code calls unplug on the device mapper, device mapper calls unplug on all
>>>> physical disks. These unplug calls take the lock, find that the queue is
>>>> already unplugged, release the lock and exit.
>>>>
>>>> With the below patch, performance of the benchmark was increased by 18%
>>>> (the whole OLTP application, not just block layer microbenchmarks).
>>>>
>>>> So I'm submitting this patch for upstream. I think the patch is correct,
>>>> because when more threads call simultaneously plug and unplug, it is
>>>> unspecified, if the queue is or isn't plugged (so the patch can't make
>>>> this worse). And the caller that plugged the queue should unplug it
>>>> anyway. (if it doesn't, there's 3ms timeout).
>>>
>>> Where were these unplug calls coming from? The block layer will
>>> generally only unplug a queue when it is actually plugged, so if you
>>> are seeing so many unplug calls that the patch reduces overhead by as
>>> much as described, perhaps the call site is buggy?
>>
>> I do not have direct access to the benchmark setup, but here is the data
>> I have received.
>>
>> The oprofile data was showing ll_rw_blk::generic_unplug_device() as a top
>> routine at 13% of the samples. Annotation of the samples shows hits on
>> spin_lock_irq(q->queue_lock).
>>
>> Here are some sample call traces:
>>
>> Call trace #1
>>
>> kernel: [<ffffffff80058c6c>] generic_unplug_device+0x5d/0xc6
>> kernel: [<ffffffff8820ea3e>] :dm_mod:dm_table_unplug_all+0x33/0x41
>> kernel: [<ffffffff8820cc85>] :dm_mod:dm_unplug_all+0x1d/0x28
>> kernel: [<ffffffff8005a78a>] blk_backing_dev_unplug+0x56/0x5b
>> kernel: [<ffffffff80014cdc>] sync_buffer+0x36/0x3f
>> kernel: [<ffffffff800629a4>] __wait_on_bit+0x40/0x6f
>> kernel: [<ffffffff80014ca6>] sync_buffer+0x0/0x3f
>> kernel: [<ffffffff80062a3f>] out_of_line_wait_on_bit+0x6c/0x78
>> kernel: [<ffffffff8009c474>] wake_bit_function+0x0/0x23
>> kernel: [<ffffffff88034c85>] :jbd:journal_commit_transaction+0x91f/0x1086
>> kernel: [<ffffffff8003d038>] lock_timer_base+0x1b/0x3c
>> kernel: [<ffffffff8803840e>] :jbd:kjournald+0xc1/0x213
>> kernel: [<ffffffff8009c446>] autoremove_wake_function+0x0/0x2e
>> kernel: [<ffffffff8009c283>] keventd_create_kthread+0x0/0x61
>> kernel: [<ffffffff8803834d>] :jbd:kjournald+0x0/0x213
>> kernel: [<ffffffff8009c283>] keventd_create_kthread+0x0/0x61
>> kernel: [<ffffffff800321d5>] kthread+0xfe/0x132
>> kernel: [<ffffffff8005cfb1>] child_rip+0xa/0x11
>> kernel: [<ffffffff8009c283>] keventd_create_kthread+0x0/0x61
>> kernel: [<ffffffff800320d7>] kthread+0x0/0x132
>> kernel: [<ffffffff8005cfa7>] child_rip+0x0/0x11
>>
>> Call trace #2
>> kernel: [<ffffffff80058c6c>] generic_unplug_device+0x5d/0xc6
>> kernel: [<ffffffff8820ea3e>] :dm_mod:dm_table_unplug_all+0x33/0x41
>> kernel: [<ffffffff8820cc85>] :dm_mod:dm_unplug_all+0x1d/0x28
>> kernel: [<ffffffff8005a78a>] blk_backing_dev_unplug+0x56/0x5b
>> kernel: [<ffffffff800e8bfe>] __blockdev_direct_IO+0x889/0xaa2
>> kernel: [<ffffffff88050800>] :ext3:ext3_direct_IO+0xf3/0x18b
>> kernel: [<ffffffff8804ec84>] :ext3:ext3_get_block+0x0/0xe3
>> kernel: [<ffffffff800be6bb>] generic_file_direct_IO+0xbd/0xfb
>> kernel: [<ffffffff8001e637>] generic_file_direct_write+0x60/0xf2
>> kernel: [<ffffffff80015cfd>] __generic_file_aio_write_nolock+0x2b7/0x3b8
>> kernel: [<ffffffff8002134f>] generic_file_aio_write+0x65/0xc1
>> kernel: [<ffffffff8804c192>] :ext3:ext3_file_write+0x16/0x91
>> kernel: [<ffffffff80017944>] do_sync_write+0xc7/0x104
>> kernel: [<ffffffff8009c446>] autoremove_wake_function+0x0/0x2e
>> kernel: [<ffffffff80111400>] free_msg+0x22/0x3c
>> kernel: [<ffffffff800161c4>] vfs_write+0xce/0x174
>> kernel: [<ffffffff8004194c>] sys_pwrite64+0x50/0x70
>> kernel: [<ffffffff8005cde9>] error_exit+0x0/0x84
>> kernel: [<ffffffff8005c116>] system_call+0x7e/0x83
>
> So it's basically dm calling into blk_unplug() all the time, which
> doesn't check if the queue is plugged. The reason why I didn't like the
> initial patch is that ->unplug_fn() really should not be called unless
> the queue IS plugged. So how about this instead:
>
> http://git.kernel.dk/?p=linux-2.6-block.git;a=commit;h=c44993018887e82abd49023e92e8d8b6000e03ed
>
> That's a lot more appropriate, imho.
>
> --
> Jens Axboe
This doesn't seem correct to me. The difference between blk_unplug and
generic_unplug_device is that blk_unplug is called on every type of
device, while generic_unplug_device (pointed to by q->unplug_fn) is a
method that is called only on low-level disk devices.
dm and md redefine q->unplug_fn to point to their own method. On dm and
md, blk_unplug is called, but generic_unplug_device is not.
So if you have this setup
dm-linear(unplugged) -> disk(plugged)
then, with your patch, a call to blk_unplug(dm-linear) will not unplug
the disk. With my patch, a call to blk_unplug(dm-linear) will unplug the
disk --- it calls q->unplug_fn, which points to dm_unplug_all; that
calls blk_unplug again on the disk, and that calls
generic_unplug_device on the disk's queue.
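A small user-space sketch of this objection; the names are
illustrative, and it assumes, per the reading above, that the proposed
commit makes blk_unplug itself test the plugged flag before calling
->unplug_fn:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct toy_queue {
    bool plugged;
    void (*unplug_fn)(struct toy_queue *q);
    struct toy_queue *member;   /* one underlying device, or NULL */
};

/* blk_unplug() with a top-level "is it plugged?" test (illustrative,
 * not the actual proposed kernel code). */
static void blk_unplug_checked(struct toy_queue *q)
{
    if (q && q->plugged)
        q->unplug_fn(q);
}

/* Low-level method installed on disk queues. */
static void toy_generic_unplug_device(struct toy_queue *q)
{
    q->plugged = false;     /* ...dispatch the queued requests... */
}

/* dm's method: recurse into the member queue via blk_unplug. */
static void toy_dm_unplug_all(struct toy_queue *q)
{
    blk_unplug_checked(q->member);
}
```

With the test at the top, blk_unplug on the (never-plugged) dm queue
returns immediately, so toy_dm_unplug_all never runs and the plugged
disk underneath is never reached.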
Mikulas
* Re: [PATCH] Optimize lock in queue unplugging
From: Jens Axboe @ 2008-05-04 19:11 UTC (permalink / raw)
To: Mikulas Patocka; +Cc: Mike Anderson, linux-kernel, Alasdair Graeme Kergon
On Wed, Apr 30 2008, Mikulas Patocka wrote:
>
>
> On Wed, 30 Apr 2008, Jens Axboe wrote:
>
> >On Tue, Apr 29 2008, Mike Anderson wrote:
> >>Jens Axboe <jens.axboe@oracle.com> wrote:
> >>>On Tue, Apr 29 2008, Mikulas Patocka wrote:
> >>>>Hi
> >>>>
> >>>>Mike Anderson was doing an OLTP benchmark on a computer with 48 physical
> >>>>disks mapped to one logical device via device mapper.
> >>>>
> >>>>He found that there was a slowdown on request_queue->lock in function
> >>>>generic_unplug_device. The slowdown is caused by the fact that when some
> >>>>code calls unplug on the device mapper, device mapper calls unplug on
> >>>>all
> >>>>physical disks. These unplug calls take the lock, find that the queue is
> >>>>already unplugged, release the lock and exit.
> >>>>
> >>>>With the below patch, performance of the benchmark was increased by 18%
> >>>>(the whole OLTP application, not just block layer microbenchmarks).
> >>>>
> >>>>So I'm submitting this patch for upstream. I think the patch is correct,
> >>>>because when more threads call simultaneously plug and unplug, it is
> >>>>unspecified, if the queue is or isn't plugged (so the patch can't make
> >>>>this worse). And the caller that plugged the queue should unplug it
> >>>>anyway. (if it doesn't, there's 3ms timeout).
> >>>
> >>>Where were these unplug calls coming from? The block layer will
> >>>generally only unplug a queue when it is actually plugged, so if you
> >>>are seeing so many unplug calls that the patch reduces overhead by as
> >>>much as described, perhaps the call site is buggy?
> >>
> >>I do not have direct access to the benchmark setup, but here is the data
> >>I have received.
> >>
> >>The oprofile data was showing ll_rw_blk::generic_unplug_device() as a top
> >>routine at 13% of the samples. Annotation of the samples shows hits on
> >>spin_lock_irq(q->queue_lock).
> >>
> >>Here are some sample call traces:
> >>
> >>Call trace #1
> >>
> >>kernel: [<ffffffff80058c6c>] generic_unplug_device+0x5d/0xc6
> >>kernel: [<ffffffff8820ea3e>] :dm_mod:dm_table_unplug_all+0x33/0x41
> >>kernel: [<ffffffff8820cc85>] :dm_mod:dm_unplug_all+0x1d/0x28
> >>kernel: [<ffffffff8005a78a>] blk_backing_dev_unplug+0x56/0x5b
> >>kernel: [<ffffffff80014cdc>] sync_buffer+0x36/0x3f
> >>kernel: [<ffffffff800629a4>] __wait_on_bit+0x40/0x6f
> >>kernel: [<ffffffff80014ca6>] sync_buffer+0x0/0x3f
> >>kernel: [<ffffffff80062a3f>] out_of_line_wait_on_bit+0x6c/0x78
> >>kernel: [<ffffffff8009c474>] wake_bit_function+0x0/0x23
> >>kernel: [<ffffffff88034c85>] :jbd:journal_commit_transaction+0x91f/0x1086
> >>kernel: [<ffffffff8003d038>] lock_timer_base+0x1b/0x3c
> >>kernel: [<ffffffff8803840e>] :jbd:kjournald+0xc1/0x213
> >>kernel: [<ffffffff8009c446>] autoremove_wake_function+0x0/0x2e
> >>kernel: [<ffffffff8009c283>] keventd_create_kthread+0x0/0x61
> >>kernel: [<ffffffff8803834d>] :jbd:kjournald+0x0/0x213
> >>kernel: [<ffffffff8009c283>] keventd_create_kthread+0x0/0x61
> >>kernel: [<ffffffff800321d5>] kthread+0xfe/0x132
> >>kernel: [<ffffffff8005cfb1>] child_rip+0xa/0x11
> >>kernel: [<ffffffff8009c283>] keventd_create_kthread+0x0/0x61
> >>kernel: [<ffffffff800320d7>] kthread+0x0/0x132
> >>kernel: [<ffffffff8005cfa7>] child_rip+0x0/0x11
> >>
> >>Call trace #2
> >>kernel: [<ffffffff80058c6c>] generic_unplug_device+0x5d/0xc6
> >>kernel: [<ffffffff8820ea3e>] :dm_mod:dm_table_unplug_all+0x33/0x41
> >>kernel: [<ffffffff8820cc85>] :dm_mod:dm_unplug_all+0x1d/0x28
> >>kernel: [<ffffffff8005a78a>] blk_backing_dev_unplug+0x56/0x5b
> >>kernel: [<ffffffff800e8bfe>] __blockdev_direct_IO+0x889/0xaa2
> >>kernel: [<ffffffff88050800>] :ext3:ext3_direct_IO+0xf3/0x18b
> >>kernel: [<ffffffff8804ec84>] :ext3:ext3_get_block+0x0/0xe3
> >>kernel: [<ffffffff800be6bb>] generic_file_direct_IO+0xbd/0xfb
> >>kernel: [<ffffffff8001e637>] generic_file_direct_write+0x60/0xf2
> >>kernel: [<ffffffff80015cfd>] __generic_file_aio_write_nolock+0x2b7/0x3b8
> >>kernel: [<ffffffff8002134f>] generic_file_aio_write+0x65/0xc1
> >>kernel: [<ffffffff8804c192>] :ext3:ext3_file_write+0x16/0x91
> >>kernel: [<ffffffff80017944>] do_sync_write+0xc7/0x104
> >>kernel: [<ffffffff8009c446>] autoremove_wake_function+0x0/0x2e
> >>kernel: [<ffffffff80111400>] free_msg+0x22/0x3c
> >>kernel: [<ffffffff800161c4>] vfs_write+0xce/0x174
> >>kernel: [<ffffffff8004194c>] sys_pwrite64+0x50/0x70
> >>kernel: [<ffffffff8005cde9>] error_exit+0x0/0x84
> >>kernel: [<ffffffff8005c116>] system_call+0x7e/0x83
> >
> >So it's basically dm calling into blk_unplug() all the time, which
> >doesn't check if the queue is plugged. The reason why I didn't like the
> >initial patch is that ->unplug_fn() really should not be called unless
> >the queue IS plugged. So how about this instead:
> >
> >http://git.kernel.dk/?p=linux-2.6-block.git;a=commit;h=c44993018887e82abd49023e92e8d8b6000e03ed
> >
> >That's a lot more appropriate, imho.
> >
> >--
> >Jens Axboe
>
> This doesn't seem correct to me. The difference between blk_unplug and
> generic_unplug_device is that blk_unplug is called on every type of device
> and generic_unplug_device (pointed to by q->unplug_fn) is a method that is
> called on low-level disk devices.
>
> dm and md redefine q->unplug_fn to point to their own method. On dm and
> md, blk_unplug is called, but generic_unplug_device is not.
>
> So if you have this setup
> dm-linear(unplugged) -> disk(plugged)
>
> then, with your patch, a call to blk_unplug(dm-linear) will not unplug the
> disk. With my patch, a call to blk_unplug(dm-linear) will unplug the disk
> --- it calls q->unplug_fn that points to dm_unplug_all, that calls
> blk_unplug again on the disk and that calls generic_unplug_device on disk
> queue.
That is because md/dm don't set the plugged flag, which I think they
should. So we fix that instead, so that plugging works the same from the
block core as from a driver, instead of adding work-arounds in the block
unplug handler. Adding a check for plugged in the unplug handler is a
hack; I don't see how you can argue against that.
--
Jens Axboe
* Re: [PATCH] Optimize lock in queue unplugging
From: Mikulas Patocka @ 2008-05-05 4:01 UTC (permalink / raw)
To: Jens Axboe; +Cc: Mike Anderson, linux-kernel, Alasdair Graeme Kergon
>>> So it's basically dm calling into blk_unplug() all the time, which
>>> doesn't check if the queue is plugged. The reason why I didn't like the
>>> initial patch is that ->unplug_fn() really should not be called unless
>>> the queue IS plugged. So how about this instead:
>>>
>>> http://git.kernel.dk/?p=linux-2.6-block.git;a=commit;h=c44993018887e82abd49023e92e8d8b6000e03ed
>>>
>>> That's a lot more appropriate, imho.
>>>
>>> --
>>> Jens Axboe
>>
>> This doesn't seem correct to me. The difference between blk_unplug and
>> generic_unplug_device is that blk_unplug is called on every type of device
>> and generic_unplug_device (pointed to by q->unplug_fn) is a method that is
>> called on low-level disk devices.
>>
>> dm and md redefine q->unplug_fn to point to their own method. On dm and
>> md, blk_unplug is called, but generic_unplug_device is not.
>>
>> So if you have this setup
>> dm-linear(unplugged) -> disk(plugged)
>>
>> then, with your patch, a call to blk_unplug(dm-linear) will not unplug the
>> disk. With my patch, a call to blk_unplug(dm-linear) will unplug the disk
>> --- it calls q->unplug_fn that points to dm_unplug_all, that calls
>> blk_unplug again on the disk and that calls generic_unplug_device on disk
>> queue.
>
> That is because the md/dm don't set the plugged flag which I think they
> should. So we fix that instead so that plugging works the same from the
> block core or from a driver instead of adding work-arounds in the block
> unplug handler.
When should dm/md set the plugged flag? A dm/md device sits on several
disks (or on further dm/md layers), some of which may be plugged and some
not --- and furthermore, the disk queues are shared with other devices:
there may be more devices on different partitions of the same disks.
So the question is: when do you want dm/md to set the plugged bit? Do you
really want to plug the queue at the top layer and merge requests there?
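(The stacked setup in question can be sketched in plain C. Everything here
--- toy_queue, toy_blk_unplug and so on --- is invented for illustration;
it models the delegation described above, not the actual kernel API.)

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model of a request queue: a plugged flag, a method pointer,
 * and (for stacked drivers) a pointer to an underlying disk queue. */
struct toy_queue {
	bool plugged;
	void (*unplug_fn)(struct toy_queue *q);
	struct toy_queue *lower;	/* dm/md: an underlying disk queue */
	int real_unplugs;		/* counts actual unplug work done */
};

static void toy_blk_unplug(struct toy_queue *q);

/* Low-level analogue of generic_unplug_device(): only disks do real work. */
static void toy_generic_unplug(struct toy_queue *q)
{
	if (q->plugged) {
		q->plugged = false;
		q->real_unplugs++;
	}
}

/* dm analogue of dm_unplug_all(): delegate to every lower queue. */
static void toy_dm_unplug_all(struct toy_queue *q)
{
	toy_blk_unplug(q->lower);
}

/* blk_unplug() analogue: just invoke the queue's method.  A variant that
 * returned early when !q->plugged would never reach the lower disks,
 * because the dm queue itself is never marked plugged. */
static void toy_blk_unplug(struct toy_queue *q)
{
	q->unplug_fn(q);
}
```

With dm-linear unplugged and the disk plugged, calling toy_blk_unplug() on
the dm queue still unplugs the disk through the delegation chain.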
> Adding a check for the plugged bit in the unplug handler is a
> hack, I don't see how you can argue against that.
I changed generic_unplug_device from:

	spin_lock()
	if (plugged bit is set) unplug...;
	spin_unlock()

to:

	if (plugged bit is set)
	{
		spin_lock()
		if (plugged bit is set) unplug...;
		spin_unlock()
	}
--- I don't see anything wrong with it. At least, the change is so trivial
that we can be sure it won't have a negative effect on performance.
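(The two shapes above can be modelled as ordinary userspace C with
pthreads. fake_queue and the function names are invented for illustration;
the real kernel code tests a queue flag bit, and the lockless read is only
safe because a stale answer is harmless --- the thread that plugged the
queue will unplug it anyway, or the 3 ms timeout fires.)

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

struct fake_queue {
	pthread_mutex_t lock;
	bool plugged;
	int real_unplugs;	/* counts actual unplug work done */
};

/* Old shape: always take the lock, even when there is nothing to do. */
static void unplug_locked(struct fake_queue *q)
{
	pthread_mutex_lock(&q->lock);
	if (q->plugged) {
		q->plugged = false;
		q->real_unplugs++;
	}
	pthread_mutex_unlock(&q->lock);
}

/* New shape: a lockless early exit.  The flag is re-checked under the
 * lock because another thread may have plugged or unplugged the queue
 * between the unlocked test and the lock acquisition. */
static void unplug_checked(struct fake_queue *q)
{
	if (!q->plugged)
		return;		/* fast path: no lock traffic at all */
	pthread_mutex_lock(&q->lock);
	if (q->plugged) {
		q->plugged = false;
		q->real_unplugs++;
	}
	pthread_mutex_unlock(&q->lock);
}
```

The fast path is what removes the contention: 47 of the 48 disk queues in
the benchmark are already unplugged, so their calls now return without
touching the lock at all.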
What you propose is a complete rewrite of the dm/md plugging mechanism to
plug the queue and merge requests at the upper layer --- the questions are:
- what exactly are you proposing? (plugging at the upper layer? at the
lower layer? at both layers? not plugging at all and just somehow
propagating the plugged bit?)
- why are you proposing that?
Note that for some dm targets (dm-linear, raid1) it would be beneficial to
merge the requests at the upper layer, while for others (raid0,
dm-snapshots) it damages performance: you merge the requests before
passing them to raid0, and raid0 then chops them into smaller pieces again.
Mikulas
> --
> Jens Axboe
>
* Re: [PATCH] Optimize lock in queue unplugging
2008-05-05 4:01 ` Mikulas Patocka
@ 2008-05-07 7:45 ` Jens Axboe
0 siblings, 0 replies; 11+ messages in thread
From: Jens Axboe @ 2008-05-07 7:45 UTC (permalink / raw)
To: Mikulas Patocka; +Cc: Mike Anderson, linux-kernel, Alasdair Graeme Kergon
On Mon, May 05 2008, Mikulas Patocka wrote:
> >>>So it's basically dm calling into blk_unplug() all the time, which
> >>>doesn't check if the queue is plugged. The reason why I didn't like the
> >>>initial patch is that ->unplug_fn() really should not be called unless
> >>>the queue IS plugged. So how about this instead:
> >>>
> >>>http://git.kernel.dk/?p=linux-2.6-block.git;a=commit;h=c44993018887e82abd49023e92e8d8b6000e03ed
> >>>
> >>>That's a lot more appropriate, imho.
> >>>
> >>>--
> >>>Jens Axboe
> >>
> >>This doesn't seem correct to me. The difference between blk_unplug and
> >>generic_unplug_device is that blk_unplug is called on every type of device
> >>and generic_unplug_device (pointed to by q->unplug_fn) is a method that is
> >>called on low-level disk devices.
> >>
> >>dm and md redefine q->unplug_fn to point to their own method. On dm and
> >>md, blk_unplug is called, but generic_unplug_device is not.
> >>
> >>So if you have this setup
> >>dm-linear(unplugged) -> disk(plugged)
> >>
> >>then, with your patch, a call to blk_unplug(dm-linear) will not unplug the
> >>disk. With my patch, a call to blk_unplug(dm-linear) will unplug the disk
> >>--- it calls q->unplug_fn that points to dm_unplug_all, that calls
> >>blk_unplug again on the disk and that calls generic_unplug_device on disk
> >>queue.
> >
> >That is because the md/dm don't set the plugged flag which I think they
> >should. So we fix that instead so that plugging works the same from the
> >block core or from a driver instead of adding work-arounds in the block
> >unplug handler.
>
> When should dm/md set the plugged flag? It has several disks (or more
> dm/md layers), some of them may be plugged, some not --- furthermore, the
> disk queues are shared by other devices --- there may be more devices on
> different partitions.
>
> So the question: when do you want dm/md to set the plugged bit? Do you
> really want to plug the queue at the top layer and merge requests there?
>
> >Adding a check for the plugged bit in the unplug handler is a
> >hack, I don't see how you can argue against that.
>
> I changed generic_unplug_device from:
> spin_lock()
> if (plugged bit is set) unplug...;
> spin_unlock()
>
> to:
> if (plugged bit is set)
> {
> spin_lock()
> if (plugged bit is set) unplug...;
> spin_unlock()
> }
>
> --- I don't see anything wrong with it. At least, the change is so trivial
> that we can be sure that it won't have negative effect on performance.
>
> What you propose is complete rewrite of dm/md plugging mechanism to plug
> the queue and merge requests at upper layer --- the questions are:
>
> - what exactly you are proposing? (plugging at upper layer? lower layer?
> both layers? don't plug and just somehow propagate the plugged bit?)
> - why are you proposing that?
>
> Note that for some dm targets it would be beneficial to join the requests
> at upper layer (dm-linear, raid1), for others (raid0, dm-snapshots) it
> damages performance (you merge the requests before passing them to raid0
> and you chop them again to smaller pieces in raid0).
It will indeed get a lot more involved. Sigh. Well, I'll apply your patch
to generic_unplug_device(); it's at least a worthwhile optimization in the
meantime.
--
Jens Axboe
end of thread, other threads:[~2008-05-07 7:46 UTC | newest]
Thread overview: 11+ messages
2008-04-29 19:12 [PATCH] Optimize lock in queue unplugging Mikulas Patocka
2008-04-29 19:25 ` Jens Axboe
2008-04-29 20:02 ` Mikulas Patocka
2008-04-29 20:05 ` Jens Axboe
2008-04-29 20:29 ` Mike Anderson
2008-04-30 7:14 ` Jens Axboe
2008-04-30 10:38 ` Alasdair G Kergon
2008-04-30 13:54 ` Mikulas Patocka
2008-05-04 19:11 ` Jens Axboe
2008-05-05 4:01 ` Mikulas Patocka
2008-05-07 7:45 ` Jens Axboe