qemu-devel.nongnu.org archive mirror
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Eric Blake <eblake@redhat.com>,
	kwolf@redhat.com, Fam Zheng <fam@euphon.net>,
	Juan Quintela <quintela@redhat.com>,
	Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>,
	Daniel Berrange <berrange@redhat.com>,
	Hanna Reitz <hreitz@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	qemu-block@nongnu.org, Leonardo Bras <leobras@redhat.com>,
	Coiby Xu <Coiby.Xu@gmail.com>, Peter Xu <peterx@redhat.com>,
	Stefan Hajnoczi <stefanha@redhat.com>
Subject: [PATCH 0/2] io: follow coroutine AioContext in qio_channel_yield()
Date: Wed, 23 Aug 2023 19:45:02 -0400	[thread overview]
Message-ID: <20230823234504.1387239-1-stefanha@redhat.com> (raw)

The ongoing QEMU multi-queue block layer effort makes it possible for multiple
threads to process I/O in parallel. The nbd block driver is not compatible with
the multi-queue block layer yet because QIOChannel cannot be used easily from
coroutines running in multiple threads. This series changes the QIOChannel API
to make that possible.
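For context, the core idea of PATCH 2/2 can be sketched roughly as follows. This is a conceptual illustration only, not the patch itself: it is not runnable standalone, and the exact handler arguments (elided as "...") are assumptions based on the subject lines and diffstat above.

```c
/*
 * Conceptual sketch: after this series, qio_channel_yield() is expected to
 * arm the channel's fd handler in the AioContext of the coroutine that
 * calls it, rather than in a context fixed up front, so coroutines running
 * in different threads can yield on the same QIOChannel.
 */
void coroutine_fn qio_channel_yield(QIOChannel *ioc,
                                    GIOCondition condition)
{
    /* Look up the AioContext of the *calling* coroutine... */
    AioContext *ctx =
        qemu_coroutine_get_aio_context(qemu_coroutine_self());

    /* ...register the fd handler there, then yield until the fd is ready. */
    qio_channel_set_aio_fd_handler(ioc, ctx, /* ... */);
    qemu_coroutine_yield();
}
```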

Stefan Hajnoczi (2):
  io: check there are no qio_channel_yield() coroutines during
    ->finalize()
  io: follow coroutine AioContext in qio_channel_yield()

 include/io/channel.h             |  34 ++++++++-
 include/qemu/vhost-user-server.h |   1 +
 block/nbd.c                      |  11 +--
 io/channel-command.c             |  13 +++-
 io/channel-file.c                |  18 ++++-
 io/channel-null.c                |   3 +-
 io/channel-socket.c              |  18 ++++-
 io/channel-tls.c                 |   6 +-
 io/channel.c                     | 124 ++++++++++++++++++++++---------
 migration/channel-block.c        |   3 +-
 nbd/client.c                     |   2 +-
 nbd/server.c                     |  14 +---
 scsi/qemu-pr-helper.c            |   4 +-
 util/vhost-user-server.c         |  27 +++++--
 14 files changed, 195 insertions(+), 83 deletions(-)

-- 
2.41.0



Thread overview: 11+ messages
2023-08-23 23:45 Stefan Hajnoczi [this message]
2023-08-23 23:45 ` [PATCH 1/2] io: check there are no qio_channel_yield() coroutines during ->finalize() Stefan Hajnoczi
2023-08-24 11:01   ` Daniel P. Berrangé
2023-08-24 18:18   ` Eric Blake
2023-08-23 23:45 ` [PATCH 2/2] io: follow coroutine AioContext in qio_channel_yield() Stefan Hajnoczi
2023-08-24 11:26   ` Daniel P. Berrangé
2023-08-24 17:07     ` Stefan Hajnoczi
2023-08-24 18:26     ` Stefan Hajnoczi
2023-08-25  8:09       ` Daniel P. Berrangé
2023-08-24 16:09   ` Fabiano Rosas
2023-08-24 17:29     ` Stefan Hajnoczi
