* [Qemu-devel] [PATCH v3 1/2] block-backend: Introduce blk_drain()
2015-06-17 10:37 [Qemu-devel] [PATCH v3 0/2] Fix slow startup with many disks Alexander Yarygin
@ 2015-06-17 10:37 ` Alexander Yarygin
2015-06-17 10:37 ` [Qemu-devel] [PATCH v3 2/2] virtio-blk: Use blk_drain() to drain IO requests Alexander Yarygin
2015-06-18 15:53 ` [Qemu-devel] [PATCH v3 0/2] Fix slow startup with many disks Stefan Hajnoczi
2 siblings, 0 replies; 10+ messages in thread
From: Alexander Yarygin @ 2015-06-17 10:37 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, qemu-block, Alexander Yarygin, Ekaterina Tumanova,
Christian Borntraeger, Stefan Hajnoczi, Cornelia Huck,
Paolo Bonzini
This patch introduces the blk_drain() function, which can be used in place
of blk_drain_all() when only one BlockDriverState needs to be drained.
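For illustration, a hypothetical call site (the real conversion happens in
patch 2 of this series; this sketch assumes the QEMU tree and a device that
owns a single BlockBackend) would change like this:

    #include "sysemu/block-backend.h"  /* declares blk_drain() and blk_drain_all() */

    /* Sketch of a per-device reset path; "blk" stands for whatever
     * BlockBackend the device owns. */
    static void example_reset_one_disk(BlockBackend *blk)
    {
        /* Previously the only option was to wait for in-flight I/O on
         * every BlockDriverState in the system: */
        /* blk_drain_all(); */

        /* With this patch the device can wait only for its own backend: */
        blk_drain(blk);
    }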
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Alexander Yarygin <yarygin@linux.vnet.ibm.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Kevin Wolf <kwolf@redhat.com>
---
block/block-backend.c | 5 +++++
include/sysemu/block-backend.h | 1 +
2 files changed, 6 insertions(+)
diff --git a/block/block-backend.c b/block/block-backend.c
index 93e46f3..aee8a12 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -700,6 +700,11 @@ int blk_flush_all(void)
return bdrv_flush_all();
}
+void blk_drain(BlockBackend *blk)
+{
+ bdrv_drain(blk->bs);
+}
+
void blk_drain_all(void)
{
bdrv_drain_all();
diff --git a/include/sysemu/block-backend.h b/include/sysemu/block-backend.h
index b4a4d5e..8fc960f 100644
--- a/include/sysemu/block-backend.h
+++ b/include/sysemu/block-backend.h
@@ -118,6 +118,7 @@ int blk_co_discard(BlockBackend *blk, int64_t sector_num, int nb_sectors);
int blk_co_flush(BlockBackend *blk);
int blk_flush(BlockBackend *blk);
int blk_flush_all(void);
+void blk_drain(BlockBackend *blk);
void blk_drain_all(void);
BlockdevOnError blk_get_on_error(BlockBackend *blk, bool is_read);
BlockErrorAction blk_get_error_action(BlockBackend *blk, bool is_read,
--
1.9.1
* [Qemu-devel] [PATCH v3 2/2] virtio-blk: Use blk_drain() to drain IO requests
2015-06-17 10:37 [Qemu-devel] [PATCH v3 0/2] Fix slow startup with many disks Alexander Yarygin
2015-06-17 10:37 ` [Qemu-devel] [PATCH v3 1/2] block-backend: Introduce blk_drain() Alexander Yarygin
@ 2015-06-17 10:37 ` Alexander Yarygin
2015-06-26 9:26 ` Markus Armbruster
2015-06-18 15:53 ` [Qemu-devel] [PATCH v3 0/2] Fix slow startup with many disks Stefan Hajnoczi
2 siblings, 1 reply; 10+ messages in thread
From: Alexander Yarygin @ 2015-06-17 10:37 UTC (permalink / raw)
To: qemu-devel
Cc: Kevin Wolf, qemu-block, Alexander Yarygin, Ekaterina Tumanova,
Michael S. Tsirkin, Christian Borntraeger, Stefan Hajnoczi,
Cornelia Huck, Paolo Bonzini
Each call of the virtio_blk_reset() function calls blk_drain_all(),
which drains all existing BlockDriverStates, although only one needs
to be drained.
This patch replaces blk_drain_all() with blk_drain() in
virtio_blk_reset(). virtio_blk_data_plane_stop() should be called
after draining because it restores vblk->complete_request.
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Alexander Yarygin <yarygin@linux.vnet.ibm.com>
---
hw/block/virtio-blk.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index e6afe97..d8a906f 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -651,16 +651,21 @@ static void virtio_blk_dma_restart_cb(void *opaque, int running,
static void virtio_blk_reset(VirtIODevice *vdev)
{
VirtIOBlock *s = VIRTIO_BLK(vdev);
-
- if (s->dataplane) {
- virtio_blk_data_plane_stop(s->dataplane);
- }
+ AioContext *ctx;
/*
* This should cancel pending requests, but can't do nicely until there
* are per-device request lists.
*/
- blk_drain_all();
+ ctx = blk_get_aio_context(s->blk);
+ aio_context_acquire(ctx);
+ blk_drain(s->blk);
+
+ if (s->dataplane) {
+ virtio_blk_data_plane_stop(s->dataplane);
+ }
+ aio_context_release(ctx);
+
blk_set_enable_write_cache(s->blk, s->original_wce);
}
--
1.9.1
* Re: [Qemu-devel] [PATCH v3 2/2] virtio-blk: Use blk_drain() to drain IO requests
2015-06-17 10:37 ` [Qemu-devel] [PATCH v3 2/2] virtio-blk: Use blk_drain() to drain IO requests Alexander Yarygin
@ 2015-06-26 9:26 ` Markus Armbruster
2015-06-26 14:00 ` Alexander Yarygin
0 siblings, 1 reply; 10+ messages in thread
From: Markus Armbruster @ 2015-06-26 9:26 UTC (permalink / raw)
To: Alexander Yarygin
Cc: Kevin Wolf, qemu-block, Michael S. Tsirkin, Ekaterina Tumanova,
qemu-devel, Christian Borntraeger, Stefan Hajnoczi, Cornelia Huck,
Paolo Bonzini
Just spotted this in my git-pull...
Alexander Yarygin <yarygin@linux.vnet.ibm.com> writes:
> Each call of the virtio_blk_reset() function calls blk_drain_all(),
> which drains all existing BlockDriverStates, although only one needs
> to be drained.
>
> This patch replaces blk_drain_all() with blk_drain() in
> virtio_blk_reset(). virtio_blk_data_plane_stop() should be called
> after draining because it restores vblk->complete_request.
>
> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
> Cc: Kevin Wolf <kwolf@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Stefan Hajnoczi <stefanha@redhat.com>
> Signed-off-by: Alexander Yarygin <yarygin@linux.vnet.ibm.com>
> ---
> hw/block/virtio-blk.c | 15 ++++++++++-----
> 1 file changed, 10 insertions(+), 5 deletions(-)
>
> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
> index e6afe97..d8a906f 100644
> --- a/hw/block/virtio-blk.c
> +++ b/hw/block/virtio-blk.c
> @@ -651,16 +651,21 @@ static void virtio_blk_dma_restart_cb(void *opaque, int running,
> static void virtio_blk_reset(VirtIODevice *vdev)
> {
> VirtIOBlock *s = VIRTIO_BLK(vdev);
> -
> - if (s->dataplane) {
> - virtio_blk_data_plane_stop(s->dataplane);
> - }
> + AioContext *ctx;
>
> /*
> * This should cancel pending requests, but can't do nicely until there
> * are per-device request lists.
> */
> - blk_drain_all();
> + ctx = blk_get_aio_context(s->blk);
> + aio_context_acquire(ctx);
> + blk_drain(s->blk);
> +
> + if (s->dataplane) {
> + virtio_blk_data_plane_stop(s->dataplane);
> + }
> + aio_context_release(ctx);
> +
> blk_set_enable_write_cache(s->blk, s->original_wce);
> }
From bdrv_drain_all()'s comment:
* Note that completion of an asynchronous I/O operation can trigger any
* number of other I/O operations on other devices---for example a coroutine
* can be arbitrarily complex and a constant flow of I/O can come until the
* coroutine is complete. Because of this, it is not possible to have a
* function to drain a single device's I/O queue.
From bdrv_drain()'s comment:
* See the warning in bdrv_drain_all(). This function can only be called if
* you are sure nothing can generate I/O because you have op blockers
* installed.
blk_drain() and blk_drain_all() are trivial wrappers.
Ignorant questions:
* Why does blk_drain() suffice here?
* Is blk_drain() (created in PATCH 1) even a safe interface?
* Re: [Qemu-devel] [PATCH v3 2/2] virtio-blk: Use blk_drain() to drain IO requests
2015-06-26 9:26 ` Markus Armbruster
@ 2015-06-26 14:00 ` Alexander Yarygin
2015-06-29 6:10 ` Markus Armbruster
0 siblings, 1 reply; 10+ messages in thread
From: Alexander Yarygin @ 2015-06-26 14:00 UTC (permalink / raw)
To: Markus Armbruster
Cc: Kevin Wolf, Fam Zheng, qemu-block, Michael S. Tsirkin,
Ekaterina Tumanova, qemu-devel, Christian Borntraeger,
Stefan Hajnoczi, Cornelia Huck, Paolo Bonzini
Markus Armbruster <armbru@redhat.com> writes:
> Just spotted this in my git-pull...
>
> Alexander Yarygin <yarygin@linux.vnet.ibm.com> writes:
>
>> Each call of the virtio_blk_reset() function calls blk_drain_all(),
>> which drains all existing BlockDriverStates, although only one needs
>> to be drained.
>>
>> This patch replaces blk_drain_all() with blk_drain() in
>> virtio_blk_reset(). virtio_blk_data_plane_stop() should be called
>> after draining because it restores vblk->complete_request.
>>
>> Cc: "Michael S. Tsirkin" <mst@redhat.com>
>> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
>> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
>> Cc: Kevin Wolf <kwolf@redhat.com>
>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> Cc: Stefan Hajnoczi <stefanha@redhat.com>
>> Signed-off-by: Alexander Yarygin <yarygin@linux.vnet.ibm.com>
>> ---
>> hw/block/virtio-blk.c | 15 ++++++++++-----
>> 1 file changed, 10 insertions(+), 5 deletions(-)
>>
>> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
>> index e6afe97..d8a906f 100644
>> --- a/hw/block/virtio-blk.c
>> +++ b/hw/block/virtio-blk.c
>> @@ -651,16 +651,21 @@ static void virtio_blk_dma_restart_cb(void *opaque, int running,
>> static void virtio_blk_reset(VirtIODevice *vdev)
>> {
>> VirtIOBlock *s = VIRTIO_BLK(vdev);
>> -
>> - if (s->dataplane) {
>> - virtio_blk_data_plane_stop(s->dataplane);
>> - }
>> + AioContext *ctx;
>>
>> /*
>> * This should cancel pending requests, but can't do nicely until there
>> * are per-device request lists.
>> */
>> - blk_drain_all();
>> + ctx = blk_get_aio_context(s->blk);
>> + aio_context_acquire(ctx);
>> + blk_drain(s->blk);
>> +
>> + if (s->dataplane) {
>> + virtio_blk_data_plane_stop(s->dataplane);
>> + }
>> + aio_context_release(ctx);
>> +
>> blk_set_enable_write_cache(s->blk, s->original_wce);
>> }
>
> From bdrv_drain_all()'s comment:
>
> * Note that completion of an asynchronous I/O operation can trigger any
> * number of other I/O operations on other devices---for example a coroutine
> * can be arbitrarily complex and a constant flow of I/O can come until the
> * coroutine is complete. Because of this, it is not possible to have a
> * function to drain a single device's I/O queue.
>
> From bdrv_drain()'s comment:
>
> * See the warning in bdrv_drain_all(). This function can only be called if
> * you are sure nothing can generate I/O because you have op blockers
> * installed.
>
> blk_drain() and blk_drain_all() are trivial wrappers.
>
> Ignorant questions:
>
> * Why does blk_drain() suffice here?
>
> * Is blk_drain() (created in PATCH 1) even a safe interface?
* We want to drain requests from only one bdrv and blk_drain() can do
that.
* Ignorant answer: I was told that the bdrv_drain_all()'s comment is
obsolete and we can use bdrv_drain(). Here is a link to the old
thread: http://marc.info/?l=qemu-devel&m=143154211017926&w=2. Since I
don't see the full picture of this area yet, I'm just relying on other
people's opinion.
* Re: [Qemu-devel] [PATCH v3 2/2] virtio-blk: Use blk_drain() to drain IO requests
2015-06-26 14:00 ` Alexander Yarygin
@ 2015-06-29 6:10 ` Markus Armbruster
2015-06-30 13:52 ` Stefan Hajnoczi
0 siblings, 1 reply; 10+ messages in thread
From: Markus Armbruster @ 2015-06-29 6:10 UTC (permalink / raw)
To: Alexander Yarygin
Cc: Kevin Wolf, Fam Zheng, qemu-block, Michael S. Tsirkin,
Ekaterina Tumanova, qemu-devel, Christian Borntraeger,
Stefan Hajnoczi, Cornelia Huck, Paolo Bonzini
Alexander Yarygin <yarygin@linux.vnet.ibm.com> writes:
> Markus Armbruster <armbru@redhat.com> writes:
>
>> Just spotted this in my git-pull...
>>
>> Alexander Yarygin <yarygin@linux.vnet.ibm.com> writes:
>>
>>> Each call of the virtio_blk_reset() function calls blk_drain_all(),
>>> which drains all existing BlockDriverStates, although only one needs
>>> to be drained.
>>>
>>> This patch replaces blk_drain_all() with blk_drain() in
>>> virtio_blk_reset(). virtio_blk_data_plane_stop() should be called
>>> after draining because it restores vblk->complete_request.
>>>
>>> Cc: "Michael S. Tsirkin" <mst@redhat.com>
>>> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
>>> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
>>> Cc: Kevin Wolf <kwolf@redhat.com>
>>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>>> Cc: Stefan Hajnoczi <stefanha@redhat.com>
>>> Signed-off-by: Alexander Yarygin <yarygin@linux.vnet.ibm.com>
>>> ---
>>> hw/block/virtio-blk.c | 15 ++++++++++-----
>>> 1 file changed, 10 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
>>> index e6afe97..d8a906f 100644
>>> --- a/hw/block/virtio-blk.c
>>> +++ b/hw/block/virtio-blk.c
>>> @@ -651,16 +651,21 @@ static void virtio_blk_dma_restart_cb(void *opaque, int running,
>>> static void virtio_blk_reset(VirtIODevice *vdev)
>>> {
>>> VirtIOBlock *s = VIRTIO_BLK(vdev);
>>> -
>>> - if (s->dataplane) {
>>> - virtio_blk_data_plane_stop(s->dataplane);
>>> - }
>>> + AioContext *ctx;
>>>
>>> /*
>>> * This should cancel pending requests, but can't do nicely until there
>>> * are per-device request lists.
>>> */
>>> - blk_drain_all();
>>> + ctx = blk_get_aio_context(s->blk);
>>> + aio_context_acquire(ctx);
>>> + blk_drain(s->blk);
>>> +
>>> + if (s->dataplane) {
>>> + virtio_blk_data_plane_stop(s->dataplane);
>>> + }
>>> + aio_context_release(ctx);
>>> +
>>> blk_set_enable_write_cache(s->blk, s->original_wce);
>>> }
>>
>> From bdrv_drain_all()'s comment:
>>
>> * Note that completion of an asynchronous I/O operation can trigger any
>> * number of other I/O operations on other devices---for example a coroutine
>> * can be arbitrarily complex and a constant flow of I/O can come until the
>> * coroutine is complete. Because of this, it is not possible to have a
>> * function to drain a single device's I/O queue.
>>
>> From bdrv_drain()'s comment:
>>
>> * See the warning in bdrv_drain_all(). This function can only be called if
>> * you are sure nothing can generate I/O because you have op blockers
>> * installed.
>>
>> blk_drain() and blk_drain_all() are trivial wrappers.
>>
>> Ignorant questions:
>>
>> * Why does blk_drain() suffice here?
>>
>> * Is blk_drain() (created in PATCH 1) even a safe interface?
>
> * We want to drain requests from only one bdrv and blk_drain() can do
> that.
It's never been a question of not wanting to drain just one device; it's
been a problem of it not working. But point taken.
> * Ignorant answer: I was told that the bdrv_drain_all()'s comment is
> obsolete and we can use bdrv_drain(). Here is a link to the old
> thread: http://marc.info/?l=qemu-devel&m=143154211017926&w=2.
Kevin, Stefan, if the comment has become wrong, it needs to be redone.
Who's going to take care of it?
> Since I
> don't see the full picture of this area yet, I'm just relying on other
> people's opinion.
That's fair, we all do :)
* Re: [Qemu-devel] [PATCH v3 2/2] virtio-blk: Use blk_drain() to drain IO requests
2015-06-29 6:10 ` Markus Armbruster
@ 2015-06-30 13:52 ` Stefan Hajnoczi
2015-07-01 15:52 ` Markus Armbruster
0 siblings, 1 reply; 10+ messages in thread
From: Stefan Hajnoczi @ 2015-06-30 13:52 UTC (permalink / raw)
To: Markus Armbruster
Cc: Kevin Wolf, Fam Zheng, qemu-block, Michael S. Tsirkin,
Ekaterina Tumanova, Alexander Yarygin, qemu-devel,
Christian Borntraeger, Cornelia Huck, Paolo Bonzini
On Mon, Jun 29, 2015 at 08:10:20AM +0200, Markus Armbruster wrote:
> Alexander Yarygin <yarygin@linux.vnet.ibm.com> writes:
> > Markus Armbruster <armbru@redhat.com> writes:
> > * Ignorant answer: I was told that the bdrv_drain_all()'s comment is
> > obsolete and we can use bdrv_drain(). Here is a link to the old
> > thread: http://marc.info/?l=qemu-devel&m=143154211017926&w=2.
>
> Kevin, Stefan, if the comment has become wrong, it needs to be redone.
> Who's going to take care of it?
I couldn't think of a scenario where this patch is unsafe.
The danger is that the I/O code path depends on something outside the
AioContext. In that case you block in aio_poll(aio_context, true)
forever without making progress. The thing the I/O request depends on
will never finish.
Code that accesses multiple BDSes puts them into the same AioContext, so
this should not be a problem in practice.
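In code terms, the pattern patch 2 relies on looks roughly like this (a
sketch only, built from the interfaces the patch already uses; it assumes
the QEMU tree and a BlockBackend whose AioContext has been set up):

    #include "block/aio.h"               /* aio_context_acquire()/release()    */
    #include "sysemu/block-backend.h"    /* blk_drain(), blk_get_aio_context() */

    /* Drain one backend from the main loop.  The comment in the body marks
     * where the hang described above would occur. */
    static void drain_one_backend(BlockBackend *blk)
    {
        AioContext *ctx = blk_get_aio_context(blk);

        aio_context_acquire(ctx);
        /* blk_drain() keeps calling aio_poll(ctx, true) until no request is
         * in flight.  If a pending request depended on an event serviced by
         * some other AioContext, this poll could block forever.  Since code
         * that touches several BDSes keeps them in one AioContext, every
         * dependency is serviced by ctx and the loop terminates. */
        blk_drain(blk);
        aio_context_release(ctx);
    }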
Stefan
* Re: [Qemu-devel] [PATCH v3 2/2] virtio-blk: Use blk_drain() to drain IO requests
2015-06-30 13:52 ` Stefan Hajnoczi
@ 2015-07-01 15:52 ` Markus Armbruster
2015-07-02 8:21 ` [Qemu-devel] [Qemu-block] " Stefan Hajnoczi
0 siblings, 1 reply; 10+ messages in thread
From: Markus Armbruster @ 2015-07-01 15:52 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: Kevin Wolf, Fam Zheng, qemu-block, Alexander Yarygin,
Ekaterina Tumanova, Michael S. Tsirkin, qemu-devel,
Christian Borntraeger, Cornelia Huck, Paolo Bonzini
Stefan Hajnoczi <stefanha@redhat.com> writes:
> On Mon, Jun 29, 2015 at 08:10:20AM +0200, Markus Armbruster wrote:
>> Alexander Yarygin <yarygin@linux.vnet.ibm.com> writes:
>> > Markus Armbruster <armbru@redhat.com> writes:
>> > * Ignorant answer: I was told that the bdrv_drain_all()'s comment is
>> > obsolete and we can use bdrv_drain(). Here is a link to the old
>> > thread: http://marc.info/?l=qemu-devel&m=143154211017926&w=2.
>>
>> Kevin, Stefan, if the comment has become wrong, it needs to be redone.
>> Who's going to take care of it?
>
> I couldn't think of a scenario where this patch is unsafe.
>
> The danger is that the I/O code path depends on something outside the
> AioContext. In that case you block in aio_poll(aio_context, true)
> forever without making progress. The thing the I/O request depends on
> will never finish.
>
> Code that accesses multiple BDSes puts them into the same AioContext, so
> this should not be a problem in practice.
Stefan, could you rework bdrv_drain()'s comment to spell out how it can be
used safely?
* Re: [Qemu-devel] [Qemu-block] [PATCH v3 2/2] virtio-blk: Use blk_drain() to drain IO requests
2015-07-01 15:52 ` Markus Armbruster
@ 2015-07-02 8:21 ` Stefan Hajnoczi
0 siblings, 0 replies; 10+ messages in thread
From: Stefan Hajnoczi @ 2015-07-02 8:21 UTC (permalink / raw)
To: Markus Armbruster
Cc: Kevin Wolf, Fam Zheng, qemu-block, Alexander Yarygin,
Ekaterina Tumanova, Michael S. Tsirkin, qemu-devel,
Christian Borntraeger, Stefan Hajnoczi, Cornelia Huck,
Paolo Bonzini
On Wed, Jul 1, 2015 at 4:52 PM, Markus Armbruster <armbru@redhat.com> wrote:
> Stefan Hajnoczi <stefanha@redhat.com> writes:
>
>> On Mon, Jun 29, 2015 at 08:10:20AM +0200, Markus Armbruster wrote:
>>> Alexander Yarygin <yarygin@linux.vnet.ibm.com> writes:
>>> > Markus Armbruster <armbru@redhat.com> writes:
>>> > * Ignorant answer: I was told that the bdrv_drain_all()'s comment is
>>> > obsolete and we can use bdrv_drain(). Here is a link to the old
>>> > thread: http://marc.info/?l=qemu-devel&m=143154211017926&w=2.
>>>
>>> Kevin, Stefan, if the comment has become wrong, it needs to be redone.
>>> Who's going to take care of it?
>>
>> I couldn't think of a scenario where this patch is unsafe.
>>
>> The danger is that the I/O code path depends on something outside the
>> AioContext. In that case you block in aio_poll(aio_context, true)
>> forever without making progress. The thing the I/O request depends on
>> will never finish.
>>
>> Code that accesses multiple BDSes puts them into the same AioContext, so
>> this should not be a problem in practice.
>
> Stefan, could you rework bdrv_drain()'s comment to spell out how it can be
> used safely?
Good idea. I will send a patch.
* Re: [Qemu-devel] [PATCH v3 0/2] Fix slow startup with many disks
2015-06-17 10:37 [Qemu-devel] [PATCH v3 0/2] Fix slow startup with many disks Alexander Yarygin
2015-06-17 10:37 ` [Qemu-devel] [PATCH v3 1/2] block-backend: Introduce blk_drain() Alexander Yarygin
2015-06-17 10:37 ` [Qemu-devel] [PATCH v3 2/2] virtio-blk: Use blk_drain() to drain IO requests Alexander Yarygin
@ 2015-06-18 15:53 ` Stefan Hajnoczi
2 siblings, 0 replies; 10+ messages in thread
From: Stefan Hajnoczi @ 2015-06-18 15:53 UTC (permalink / raw)
To: Alexander Yarygin
Cc: Kevin Wolf, qemu-block, Ekaterina Tumanova, qemu-devel,
Christian Borntraeger, Cornelia Huck, Paolo Bonzini
On Wed, Jun 17, 2015 at 01:37:18PM +0300, Alexander Yarygin wrote:
> Changes in v3:
> - Added aio_context_acquire/aio_context_release around blk_drain() in
> "virtio-blk: Use blk_drain() to drain IO requests" + updated commit
> description
>
> Please update Cc: qemu-stable@ if necessary.
>
> Changes in v2:
> - Patch "block-backend: Introduce blk_drain() and replace blk_drain_all()"
> splitted to two.
> - blk_drain() moved to above virtio_blk_data_plane_stop().
>
> Previous thread:
> http://lists.gnu.org/archive/html/qemu-devel/2015-06/msg02786.html
>
> During reset the aio_poll() function is called at least amount_of_disks^2 times:
>
> for_each disk
> virtio_blk_reset()
> bdrv_drain_all()
> for_each disk
> aio_poll()
>
> For example, startup with 1000 disks takes over 13 minutes.
>
> Patches 1 and 2 remove the inner loop by using bdrv_drain() instead
> of bdrv_drain_all(). bdrv_drain() works on one disk at a time.
>
> Since bdrv_drain_all() is still called in other places, patch 3 optimizes
> it for cases where there are more disks than iothreads.
>
> Thanks.
>
> Alexander Yarygin (2):
> block-backend: Introduce blk_drain()
> virtio-blk: Use blk_drain() to drain IO requests
>
> block/block-backend.c | 5 +++++
> hw/block/virtio-blk.c | 15 ++++++++++-----
> include/sysemu/block-backend.h | 1 +
> 3 files changed, 16 insertions(+), 5 deletions(-)
>
> --
> 1.9.1
>
Thanks, applied to my block tree:
https://github.com/stefanha/qemu/commits/block
Stefan
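As a back-of-the-envelope check of the numbers in the quoted cover letter,
a tiny standalone program (plain C, nothing QEMU-specific; the cost of a
single aio_poll() call is of course not constant in reality) counts the
poll iterations a reset of n disks needs with the old and the new scheme:

    #include <stdio.h>

    int main(void)
    {
        long n = 1000;              /* disks, as in the cover letter        */
        long drain_all = n * n;     /* each reset polls every disk          */
        long drain_one = n;         /* each reset polls only its own disk   */

        printf("drain_all: %ld poll iterations, drain_one: %ld\n",
               drain_all, drain_one);
        return 0;
    }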