From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Daniel P. Berrangé, Juan Quintela, Julia Suvorova, Kevin Wolf,
 xen-devel@lists.xenproject.org, eesposit@redhat.com, Richard Henderson,
 Fam Zheng, "Michael S. Tsirkin", Coiby Xu, David Woodhouse,
 Marcel Apfelbaum, Peter Lieven, Paul Durrant, Stefan Hajnoczi,
 "Richard W.M. Jones", qemu-block@nongnu.org, Stefano Garzarella,
 Anthony Perard, Stefan Weil, Xie Yongji, Paolo Bonzini, Aarushi Mehta,
 Philippe Mathieu-Daudé, Eduardo Habkost, Stefano Stabellini,
 Hanna Reitz, Ronnie Sahlberg
Subject: [PATCH v4 07/20] block/export: stop using is_external in vhost-user-blk server
Date: Tue, 25 Apr 2023 13:27:03 -0400
Message-Id: <20230425172716.1033562-8-stefanha@redhat.com>
In-Reply-To: <20230425172716.1033562-1-stefanha@redhat.com>
References: <20230425172716.1033562-1-stefanha@redhat.com>

vhost-user activity must be suspended during bdrv_drained_begin/end().
This prevents new requests from interfering with whatever is happening
in the drained section.

Previously this was done using aio_set_fd_handler()'s is_external
argument. In a multi-queue block layer world the aio_disable_external()
API cannot be used since multiple AioContexts may be processing I/O, not
just one.

Switch to BlockDevOps->drained_begin/end() callbacks.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/vhost-user-blk-server.c | 43 ++++++++++++++--------------
 util/vhost-user-server.c             | 10 +++----
 2 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/block/export/vhost-user-blk-server.c b/block/export/vhost-user-blk-server.c
index 092b86aae4..d20f69cd74 100644
--- a/block/export/vhost-user-blk-server.c
+++ b/block/export/vhost-user-blk-server.c
@@ -208,22 +208,6 @@ static const VuDevIface vu_blk_iface = {
     .process_msg = vu_blk_process_msg,
 };
 
-static void blk_aio_attached(AioContext *ctx, void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vexp->export.ctx = ctx;
-    vhost_user_server_attach_aio_context(&vexp->vu_server, ctx);
-}
-
-static void blk_aio_detach(void *opaque)
-{
-    VuBlkExport *vexp = opaque;
-
-    vhost_user_server_detach_aio_context(&vexp->vu_server);
-    vexp->export.ctx = NULL;
-}
-
 static void
 vu_blk_initialize_config(BlockDriverState *bs,
                          struct virtio_blk_config *config,
@@ -272,6 +256,25 @@ static void vu_blk_exp_resize(void *opaque)
     vu_config_change_msg(&vexp->vu_server.vu_dev);
 }
 
+/* Called with vexp->export.ctx acquired */
+static void vu_blk_drained_begin(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    vhost_user_server_detach_aio_context(&vexp->vu_server);
+}
+
+/* Called with vexp->export.blk AioContext acquired */
+static void vu_blk_drained_end(void *opaque)
+{
+    VuBlkExport *vexp = opaque;
+
+    /* Refresh AioContext in case it changed */
+    vexp->export.ctx = blk_get_aio_context(vexp->export.blk);
+
+    vhost_user_server_attach_aio_context(&vexp->vu_server, vexp->export.ctx);
+}
+
 /*
  * Ensures that bdrv_drained_begin() waits until in-flight requests complete.
  *
@@ -285,6 +288,8 @@ static bool vu_blk_drained_poll(void *opaque)
 }
 
 static const BlockDevOps vu_blk_dev_ops = {
+    .drained_begin = vu_blk_drained_begin,
+    .drained_end = vu_blk_drained_end,
     .drained_poll = vu_blk_drained_poll,
     .resize_cb = vu_blk_exp_resize,
 };
@@ -328,15 +333,11 @@ static int vu_blk_exp_create(BlockExport *exp, BlockExportOptions *opts,
                              logical_block_size, num_queues);
 
     blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
-    blk_add_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                 vexp);
 
     blk_set_dev_ops(exp->blk, &vu_blk_dev_ops, vexp);
 
     if (!vhost_user_server_start(&vexp->vu_server, vu_opts->addr, exp->ctx,
                                  num_queues, &vu_blk_iface, errp)) {
-        blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
-                                        blk_aio_detach, vexp);
         blk_set_dev_ops(exp->blk, NULL, NULL);
         g_free(vexp->handler.serial);
         return -EADDRNOTAVAIL;
@@ -349,8 +350,6 @@ static void vu_blk_exp_delete(BlockExport *exp)
 {
     VuBlkExport *vexp = container_of(exp, VuBlkExport, export);
 
-    blk_remove_aio_context_notifier(exp->blk, blk_aio_attached, blk_aio_detach,
-                                    vexp);
     blk_set_dev_ops(exp->blk, NULL, NULL);
     g_free(vexp->handler.serial);
 }
diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
index 2e6b640050..332aea9306 100644
--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -278,7 +278,7 @@ set_watch(VuDev *vu_dev, int fd, int vu_evt,
         vu_fd_watch->fd = fd;
         vu_fd_watch->cb = cb;
         qemu_socket_set_nonblock(fd);
-        aio_set_fd_handler(server->ioc->ctx, fd, true, kick_handler,
+        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
                            NULL, NULL, NULL, vu_fd_watch);
         vu_fd_watch->vu_dev = vu_dev;
         vu_fd_watch->pvt = pvt;
@@ -299,7 +299,7 @@ static void remove_watch(VuDev *vu_dev, int fd)
     if (!vu_fd_watch) {
         return;
     }
-    aio_set_fd_handler(server->ioc->ctx, fd, true,
+    aio_set_fd_handler(server->ioc->ctx, fd, false,
                        NULL, NULL, NULL, NULL, NULL);
 
     QTAILQ_REMOVE(&server->vu_fd_watches, vu_fd_watch, next);
@@ -362,7 +362,7 @@ void vhost_user_server_stop(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
@@ -403,7 +403,7 @@ void vhost_user_server_attach_aio_context(VuServer *server, AioContext *ctx)
     qio_channel_attach_aio_context(server->ioc, ctx);
 
     QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-        aio_set_fd_handler(ctx, vu_fd_watch->fd, true, kick_handler, NULL,
+        aio_set_fd_handler(ctx, vu_fd_watch->fd, false, kick_handler, NULL,
                            NULL, NULL, vu_fd_watch);
     }
 
@@ -417,7 +417,7 @@ void vhost_user_server_detach_aio_context(VuServer *server)
         VuFdWatch *vu_fd_watch;
 
         QTAILQ_FOREACH(vu_fd_watch, &server->vu_fd_watches, next) {
-            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, true,
+            aio_set_fd_handler(server->ctx, vu_fd_watch->fd, false,
                                NULL, NULL, NULL, NULL, vu_fd_watch);
         }
 
-- 
2.39.2
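[Editor's note: for readers unfamiliar with the mechanism retired by
the true -> false flips above: in this era of QEMU, the third argument
of aio_set_fd_handler() was a bool is_external, and draining called
aio_disable_external() on one AioContext to suppress such handlers.
The sketch below is a hypothetical toy model of that scheme, not
QEMU's implementation. It shows why the scheme assumes a single
AioContext: the disable counter is per-context, so once several
AioContexts process I/O for the same device there is no single switch
to flip.]

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy per-context state; QEMU's AioContext is far richer than this. */
typedef struct ToyAioContext {
    int external_disable_cnt;
} ToyAioContext;

typedef struct ToyFdHandler {
    bool is_external;            /* the flag this patch stops using */
    void (*io_read)(void *opaque);
    void *opaque;
} ToyFdHandler;

/* Toy stand-ins for aio_disable_external()/aio_enable_external() */
static void toy_disable_external(ToyAioContext *ctx)
{
    ctx->external_disable_cnt++;
}

static void toy_enable_external(ToyAioContext *ctx)
{
    assert(ctx->external_disable_cnt > 0);
    ctx->external_disable_cnt--;
}

/* Dispatch skips "external" handlers while a drain is in progress. */
static void toy_dispatch(ToyAioContext *ctx, ToyFdHandler *h)
{
    if (h->is_external && ctx->external_disable_cnt > 0) {
        return; /* fd stays readable; the kick is simply not serviced */
    }
    h->io_read(h->opaque);
}

static void on_kick(void *opaque)
{
    (void)opaque;
    printf("processing guest kick\n");
}

int main(void)
{
    ToyAioContext ctx = { 0 };
    ToyFdHandler kick = { .is_external = true, .io_read = on_kick };

    toy_dispatch(&ctx, &kick);   /* runs */
    toy_disable_external(&ctx);
    toy_dispatch(&ctx, &kick);   /* suppressed during the drain */
    toy_enable_external(&ctx);
    toy_dispatch(&ctx, &kick);   /* runs again */
    return 0;
}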