* [PATCH v3 0/1] Patch to adjust coroutine pool size adaptively
@ 2022-01-28 8:36 Hiroki Narukawa
2022-01-28 8:36 ` [PATCH v3 1/1] util: adjust coroutine pool size to virtio block queue Hiroki Narukawa
2022-03-17 19:10 ` [PATCH v3 0/1] Patch to adjust coroutine pool size adaptively Maxim Levitsky
0 siblings, 2 replies; 6+ messages in thread
From: Hiroki Narukawa @ 2022-01-28 8:36 UTC (permalink / raw)
To: qemu-devel
Cc: kwolf, Hiroki Narukawa, qemu-block, mst, f4bug, hreitz, stefanha,
aoiwa
Resending the patch, now also decreasing the coroutine pool size on device removal.
We encountered random disk I/O performance drops since qemu-5.0.0, and this patch fixes them.
The commit message of c740ad92 implied that the coroutine pool size should be adjusted adaptively, so I tried to implement that.
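For context, a hedged sketch of the kind of guest-side random-read workload that exposes the old fixed pool of 64 (the fio parameters below are illustrative, not the exact benchmark behind this report): once more requests are in flight than there are pooled coroutines, qemu falls back to allocating and freeing a coroutine per request.

  # illustrative only; /dev/vdb stands in for the virtio-blk disk under test
  fio --name=randread --filename=/dev/vdb --direct=1 --ioengine=libaio \
      --rw=randread --bs=4k --iodepth=64 --numjobs=4 --time_based --runtime=60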
Changes from v2:
Decrease the coroutine pool size on device removal
Changes from v1:
Use qatomic_read properly
Hiroki Narukawa (1):
util: adjust coroutine pool size to virtio block queue
hw/block/virtio-blk.c | 5 +++++
include/qemu/coroutine.h | 10 ++++++++++
util/qemu-coroutine.c | 20 ++++++++++++++++----
3 files changed, 31 insertions(+), 4 deletions(-)
--
2.17.1
* [PATCH v3 1/1] util: adjust coroutine pool size to virtio block queue
2022-01-28 8:36 [PATCH v3 0/1] Patch to adjust coroutine pool size adaptively Hiroki Narukawa
@ 2022-01-28 8:36 ` Hiroki Narukawa
2022-02-07 17:20 ` Stefan Hajnoczi
2022-02-14 11:54 ` Hiroki Narukawa
2022-03-17 19:10 ` [PATCH v3 0/1] Patch to adjust coroutine pool size adaptively Maxim Levitsky
1 sibling, 2 replies; 6+ messages in thread
From: Hiroki Narukawa @ 2022-01-28 8:36 UTC (permalink / raw)
To: qemu-devel
Cc: kwolf, Hiroki Narukawa, qemu-block, mst, f4bug, hreitz, stefanha,
aoiwa
The coroutine pool size has been 64 for a long time; the rationale is laid out in the commit message of c740ad92.
At that time, virtio-blk queue-size and num-queues were not configurable, and the effective values were 128 and 1, so a pool size of 64 was fine.
Later, queue-size and num-queues became configurable, and the default values were increased.
With these larger values, a pool of 64 coroutines is exhausted frequently under random disk I/O, which slows I/O down.
This commit adjusts the coroutine pool size adaptively based on these values.
The pool batch size still starts at 64 by default; coroutines are no longer used only by block devices, and this base is not much of a burden compared with the new default pool size of 128 * vCPUs.
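As a rough illustration (assuming the current defaults of queue-size=256 and num-queues equal to the vCPU count, which is where the 128 * vCPUs figure comes from), each realized virtio-blk device raises the batch size by num_queues * queue_size / 2 on top of the base of 64; a hedged sketch with made-up numbers:

  /* illustrative numbers only: one device, 4 queues, default queue-size of 256 */
  pool_batch_size = 64 + (4 * 256 / 2);   /* = 576 */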
Signed-off-by: Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
---
hw/block/virtio-blk.c | 5 +++++
include/qemu/coroutine.h | 10 ++++++++++
util/qemu-coroutine.c | 20 ++++++++++++++++----
3 files changed, 31 insertions(+), 4 deletions(-)
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 82676cdd01..540c38f829 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -32,6 +32,7 @@
#include "hw/virtio/virtio-bus.h"
#include "migration/qemu-file-types.h"
#include "hw/virtio/virtio-access.h"
+#include "qemu/coroutine.h"
/* Config size before the discard support (hide associated config fields) */
#define VIRTIO_BLK_CFG_SIZE offsetof(struct virtio_blk_config, \
@@ -1214,6 +1215,8 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
for (i = 0; i < conf->num_queues; i++) {
virtio_add_queue(vdev, conf->queue_size, virtio_blk_handle_output);
}
+ qemu_coroutine_increase_pool_batch_size(conf->num_queues * conf->queue_size
+ / 2);
virtio_blk_data_plane_create(vdev, conf, &s->dataplane, &err);
if (err != NULL) {
error_propagate(errp, err);
@@ -1250,6 +1253,8 @@ static void virtio_blk_device_unrealize(DeviceState *dev)
for (i = 0; i < conf->num_queues; i++) {
virtio_del_queue(vdev, i);
}
+ qemu_coroutine_decrease_pool_batch_size(conf->num_queues * conf->queue_size
+ / 2);
qemu_del_vm_change_state_handler(s->change);
blockdev_mark_auto_del(s->blk);
virtio_cleanup(vdev);
diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h
index 4829ff373d..c828a95ee0 100644
--- a/include/qemu/coroutine.h
+++ b/include/qemu/coroutine.h
@@ -331,6 +331,16 @@ void qemu_co_sleep_wake(QemuCoSleep *w);
*/
void coroutine_fn yield_until_fd_readable(int fd);
+/**
+ * Increase the coroutine pool batch size by additional_pool_size
+ */
+void qemu_coroutine_increase_pool_batch_size(unsigned int additional_pool_size);
+
+/**
+ * Decrease the coroutine pool batch size
+ */
+void qemu_coroutine_decrease_pool_batch_size(unsigned int removing_pool_size);
+
#include "qemu/lockable.h"
#endif /* QEMU_COROUTINE_H */
diff --git a/util/qemu-coroutine.c b/util/qemu-coroutine.c
index 38fb6d3084..c03b2422ff 100644
--- a/util/qemu-coroutine.c
+++ b/util/qemu-coroutine.c
@@ -20,12 +20,14 @@
#include "qemu/coroutine_int.h"
#include "block/aio.h"
+/** Initial batch size is 64; it is adjusted as devices are added or removed */
enum {
- POOL_BATCH_SIZE = 64,
+ POOL_INITIAL_BATCH_SIZE = 64,
};
/** Free list to speed up creation */
static QSLIST_HEAD(, Coroutine) release_pool = QSLIST_HEAD_INITIALIZER(pool);
+static unsigned int pool_batch_size = POOL_INITIAL_BATCH_SIZE;
static unsigned int release_pool_size;
static __thread QSLIST_HEAD(, Coroutine) alloc_pool = QSLIST_HEAD_INITIALIZER(pool);
static __thread unsigned int alloc_pool_size;
@@ -49,7 +51,7 @@ Coroutine *qemu_coroutine_create(CoroutineEntry *entry, void *opaque)
if (CONFIG_COROUTINE_POOL) {
co = QSLIST_FIRST(&alloc_pool);
if (!co) {
- if (release_pool_size > POOL_BATCH_SIZE) {
+ if (release_pool_size > qatomic_read(&pool_batch_size)) {
/* Slow path; a good place to register the destructor, too. */
if (!coroutine_pool_cleanup_notifier.notify) {
coroutine_pool_cleanup_notifier.notify = coroutine_pool_cleanup;
@@ -86,12 +88,12 @@ static void coroutine_delete(Coroutine *co)
co->caller = NULL;
if (CONFIG_COROUTINE_POOL) {
- if (release_pool_size < POOL_BATCH_SIZE * 2) {
+ if (release_pool_size < qatomic_read(&pool_batch_size) * 2) {
QSLIST_INSERT_HEAD_ATOMIC(&release_pool, co, pool_next);
qatomic_inc(&release_pool_size);
return;
}
- if (alloc_pool_size < POOL_BATCH_SIZE) {
+ if (alloc_pool_size < qatomic_read(&pool_batch_size)) {
QSLIST_INSERT_HEAD(&alloc_pool, co, pool_next);
alloc_pool_size++;
return;
@@ -202,3 +204,13 @@ AioContext *coroutine_fn qemu_coroutine_get_aio_context(Coroutine *co)
{
return co->ctx;
}
+
+void qemu_coroutine_increase_pool_batch_size(unsigned int additional_pool_size)
+{
+ qatomic_add(&pool_batch_size, additional_pool_size);
+}
+
+void qemu_coroutine_decrease_pool_batch_size(unsigned int removing_pool_size)
+{
+ qatomic_sub(&pool_batch_size, removing_pool_size);
+}
--
2.17.1
* Re: [PATCH v3 1/1] util: adjust coroutine pool size to virtio block queue
2022-01-28 8:36 ` [PATCH v3 1/1] util: adjust coroutine pool size to virtio block queue Hiroki Narukawa
@ 2022-02-07 17:20 ` Stefan Hajnoczi
2022-02-14 11:54 ` Hiroki Narukawa
1 sibling, 0 replies; 6+ messages in thread
From: Stefan Hajnoczi @ 2022-02-07 17:20 UTC (permalink / raw)
To: Hiroki Narukawa; +Cc: kwolf, qemu-block, mst, f4bug, qemu-devel, hreitz, aoiwa
On Fri, Jan 28, 2022 at 05:36:16PM +0900, Hiroki Narukawa wrote:
> The coroutine pool size has been 64 for a long time; the rationale is laid out in the commit message of c740ad92.
>
> At that time, virtio-blk queue-size and num-queues were not configurable, and the effective values were 128 and 1, so a pool size of 64 was fine.
>
> Later, queue-size and num-queues became configurable, and the default values were increased.
>
> With these larger values, a pool of 64 coroutines is exhausted frequently under random disk I/O, which slows I/O down.
>
> This commit adjusts the coroutine pool size adaptively based on these values.
>
> The pool batch size still starts at 64 by default; coroutines are no longer used only by block devices, and this base is not much of a burden compared with the new default pool size of 128 * vCPUs.
>
> Signed-off-by: Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
> ---
> hw/block/virtio-blk.c | 5 +++++
> include/qemu/coroutine.h | 10 ++++++++++
> util/qemu-coroutine.c | 20 ++++++++++++++++----
> 3 files changed, 31 insertions(+), 4 deletions(-)
Thanks, applied to my block tree:
https://gitlab.com/stefanha/qemu/commits/block
Stefan
* RE: [PATCH v3 1/1] util: adjust coroutine pool size to virtio block queue
2022-01-28 8:36 ` [PATCH v3 1/1] util: adjust coroutine pool size to virtio block queue Hiroki Narukawa
2022-02-07 17:20 ` Stefan Hajnoczi
@ 2022-02-14 11:54 ` Hiroki Narukawa
2022-02-14 17:11 ` Stefan Hajnoczi
1 sibling, 1 reply; 6+ messages in thread
From: Hiroki Narukawa @ 2022-02-14 11:54 UTC (permalink / raw)
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, qemu-block@nongnu.org, mst@redhat.com,
f4bug@amsat.org, hreitz@redhat.com, stefanha@redhat.com,
Akira Oiwa
> The coroutine pool size has been 64 for a long time; the rationale is laid
> out in the commit message of c740ad92.
Sorry, I noticed that the commit ID mentioned here is incorrect.
The correct one is 4d68e86b.
https://gitlab.com/qemu-project/qemu/-/commit/4d68e86bb10159099da0798f74e7512955f15eec
I have resent this patch as v4 with exactly the same code as v3, changing only the commit message.
>
> At that time, virtio-blk queue-size and num-queues were not configurable,
> and the effective values were 128 and 1, so a pool size of 64 was fine.
>
> Later, queue-size and num-queues became configurable, and the default
> values were increased.
>
> With these larger values, a pool of 64 coroutines is exhausted frequently
> under random disk I/O, which slows I/O down.
>
> This commit adjusts the coroutine pool size adaptively based on these values.
>
> The pool batch size still starts at 64 by default; coroutines are no longer
> used only by block devices, and this base is not much of a burden compared
> with the new default pool size of 128 * vCPUs.
>
> Signed-off-by: Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
> ---
> hw/block/virtio-blk.c | 5 +++++
> include/qemu/coroutine.h | 10 ++++++++++
> util/qemu-coroutine.c | 20 ++++++++++++++++----
> 3 files changed, 31 insertions(+), 4 deletions(-)
>
* Re: [PATCH v3 1/1] util: adjust coroutine pool size to virtio block queue
2022-02-14 11:54 ` Hiroki Narukawa
@ 2022-02-14 17:11 ` Stefan Hajnoczi
0 siblings, 0 replies; 6+ messages in thread
From: Stefan Hajnoczi @ 2022-02-14 17:11 UTC (permalink / raw)
To: Hiroki Narukawa
Cc: kwolf@redhat.com, qemu-block@nongnu.org, mst@redhat.com,
f4bug@amsat.org, qemu-devel@nongnu.org, hreitz@redhat.com,
Akira Oiwa
On Mon, Feb 14, 2022 at 11:54:34AM +0000, Hiroki Narukawa wrote:
> > The coroutine pool size has been 64 for a long time; the rationale is laid
> > out in the commit message of c740ad92.
>
> Sorry, I noticed that the commit ID mentioned here is incorrect.
> The correct one is 4d68e86b.
>
> https://gitlab.com/qemu-project/qemu/-/commit/4d68e86bb10159099da0798f74e7512955f15eec
>
> I have resent this patch as v4 with exactly the same code as v3, changing only the commit message.
Thanks!
Stefan
* Re: [PATCH v3 0/1] Patch to adjust coroutine pool size adaptively
2022-01-28 8:36 [PATCH v3 0/1] Patch to adjust coroutine pool size adaptively Hiroki Narukawa
2022-01-28 8:36 ` [PATCH v3 1/1] util: adjust coroutine pool size to virtio block queue Hiroki Narukawa
@ 2022-03-17 19:10 ` Maxim Levitsky
1 sibling, 0 replies; 6+ messages in thread
From: Maxim Levitsky @ 2022-03-17 19:10 UTC (permalink / raw)
To: Hiroki Narukawa, qemu-devel
Cc: kwolf, qemu-block, mst, f4bug, hreitz, stefanha, aoiwa
On Fri, 2022-01-28 at 17:36 +0900, Hiroki Narukawa wrote:
> Resending the patch, now also decreasing the coroutine pool size on device removal.
>
> We encountered random disk I/O performance drops since qemu-5.0.0, and this patch fixes them.
>
> The commit message of c740ad92 implied that the coroutine pool size should be adjusted adaptively, so I tried to implement that.
>
> Changes from v2:
> Decrease the coroutine pool size on device removal
>
> Changes from v1:
> Use qatomic_read properly
>
>
> Hiroki Narukawa (1):
> util: adjust coroutine pool size to virtio block queue
>
> hw/block/virtio-blk.c | 5 +++++
> include/qemu/coroutine.h | 10 ++++++++++
> util/qemu-coroutine.c | 20 ++++++++++++++++----
> 3 files changed, 31 insertions(+), 4 deletions(-)
>
I just bisected this patch as the cause of breakage in the 32-bit qemu setup that I use for testing.
L1 is a 32-bit VM with 16 GB of RAM (with PAE) and 16 vCPUs, and
L2 is a 32-bit VM with 1.3 GB of RAM and 14 vCPUs (2 fewer).
Qemu runs out of memory, because the new number of coroutines is quite high (14 * 256).
I understand that 32-bit qemu is very limited anyway, so I won't argue
against this patch. Just FYI.
As a workaround I reduced the virtio-blk queue-size to 16,
and it seems to work again. I only keep this configuration to test
that it boots, so performance is not an issue.
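For reference, the workaround is just the existing virtio-blk queue-size property, roughly like this (the drive id is a placeholder):

  -device virtio-blk-pci,drive=drive0,queue-size=16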
An option to override the coroutine pool size directly would be ideal in this case, though, IMHO.
Best regards,
Maxim Levitsky