From: Paolo Bonzini <pbonzini@redhat.com>
To: Peter Lieven <pl@kamp.de>, qemu-devel@nongnu.org
Cc: kwolf@redhat.com, famz@redhat.com, benoit@irqsave.net,
	ming.lei@canonical.com, armbru@redhat.com, mreitz@redhat.com,
	stefanha@redhat.com
Subject: Re: [Qemu-devel] [RFC PATCH 3/3] qemu-coroutine: use a ring per thread for the pool
Date: Thu, 27 Nov 2014 17:40:23 +0100
Message-ID: <547753F7.2030709@redhat.com>
In-Reply-To: <1417084026-12307-4-git-send-email-pl@kamp.de>

On 27/11/2014 11:27, Peter Lieven wrote:
> +static __thread struct CoRoutinePool {
> +    Coroutine *ptrs[POOL_MAX_SIZE];
> +    unsigned int size;
> +    unsigned int nextfree;
> +} CoPool;
>  

The per-thread ring unfortunately didn't work well last time it was
tested.  Devices that do not use ioeventfd (not just the slow ones, even
decently performing ones like ahci, nvme or megasas) will create the
coroutine in the VCPU thread, and destroy it in the iothread.  The
result is that coroutines cannot be reused.
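
To make the failure mode concrete, here is a minimal sketch of the
per-thread pool logic (pool_get()/pool_put() and the malloc-based
allocation are hypothetical stand-ins, not the code from this patch):

#include <stdlib.h>

#define POOL_MAX_SIZE 64

/* Stand-in for QEMU's Coroutine; only the pooling logic matters here. */
typedef struct Coroutine { int dummy; } Coroutine;

/* One pool per thread, mirroring the quoted hunk. */
static __thread struct CoRoutinePool {
    Coroutine *ptrs[POOL_MAX_SIZE];
    unsigned int size;
} CoPool;

/* Stand-in for the allocation path; for a device without ioeventfd
 * this runs in the VCPU thread. */
static Coroutine *pool_get(void)
{
    if (CoPool.size > 0) {
        return CoPool.ptrs[--CoPool.size];  /* reuse from this thread's pool */
    }
    return calloc(1, sizeof(Coroutine));    /* miss: allocate a fresh one */
}

/* Stand-in for the release path; the request completes in the
 * iothread, so this runs there. */
static void pool_put(Coroutine *co)
{
    if (CoPool.size < POOL_MAX_SIZE) {
        CoPool.ptrs[CoPool.size++] = co;    /* lands in the releasing thread's pool */
    } else {
        free(co);
    }
}

Because CoPool is __thread, releases done in the iothread never refill
the VCPU thread's pool: every pool_get() in the VCPU thread misses,
while the iothread's pool only fills up and then starts freeing.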

Can you check if this is still the case?

Paolo

Thread overview: 27+ messages
2014-11-27 10:27 [Qemu-devel] [RFC PATCH 0/3] qemu-coroutine: use a ring per thread for the pool Peter Lieven
2014-11-27 10:27 ` [Qemu-devel] [RFC PATCH 1/3] Revert "coroutine: make pool size dynamic" Peter Lieven
2014-11-28 12:42   ` Stefan Hajnoczi
2014-11-28 12:45     ` Peter Lieven
2014-11-27 10:27 ` [Qemu-devel] [RFC PATCH 2/3] block/block-backend.c: remove coroutine pool reservation Peter Lieven
2014-11-27 10:27 ` [Qemu-devel] [RFC PATCH 3/3] qemu-coroutine: use a ring per thread for the pool Peter Lieven
2014-11-27 16:40   ` Paolo Bonzini [this message]
2014-11-28  8:13     ` Peter Lieven
     [not found]       ` <54784E55.6060405@redhat.com>
2014-11-28 10:37         ` Peter Lieven
2014-11-28 11:14           ` Paolo Bonzini
2014-11-28 11:21             ` Peter Lieven
2014-11-28 11:23               ` Paolo Bonzini
2014-11-28 11:27                 ` Peter Lieven
2014-11-28 11:32                 ` Peter Lieven
2014-11-28 11:46                   ` Peter Lieven
2014-11-28 12:26                     ` Paolo Bonzini
2014-11-28 12:39                       ` Peter Lieven
2014-11-28 12:45                         ` Paolo Bonzini
2014-11-28 12:49                           ` Peter Lieven
2014-11-28 12:56                             ` Paolo Bonzini
2014-11-28 13:17                           ` Peter Lieven
2014-11-28 14:17                             ` Paolo Bonzini
2014-11-28 20:11                               ` Peter Lieven
2014-11-28 13:13                         ` Peter Lieven
2014-11-28 12:21                   ` Paolo Bonzini
2014-11-28 12:26                     ` Peter Lieven
2014-11-28 12:40   ` Stefan Hajnoczi
