qemu-devel.nongnu.org archive mirror
* [PATCH] block/nvme: use AIO_WAIT_WHILE_UNLOCKED()
@ 2023-04-04 11:20 Stefan Hajnoczi
  2023-04-04 11:46 ` Philippe Mathieu-Daudé
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Stefan Hajnoczi @ 2023-04-04 11:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Fam Zheng, Philippe Mathieu-Daudé, Kevin Wolf, qemu-block,
	Hanna Reitz, Stefan Hajnoczi

A few Admin Queue commands are submitted during nvme_file_open(). They
are synchronous since device initialization cannot continue until the
commands complete.

AIO_WAIT_WHILE() is currently used, but the block/nvme.c code actually
doesn't rely on the AioContext lock. Replace it with
AIO_WAIT_WHILE_UNLOCKED(NULL, condition). There is no change in behavior
and the dependency on the AioContext lock is eliminated.

This is a step towards removing the AioContext lock.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/nvme.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/block/nvme.c b/block/nvme.c
index 5b744c2bda..829b9c04db 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -512,7 +512,6 @@ static int nvme_admin_cmd_sync(BlockDriverState *bs, NvmeCmd *cmd)
 {
     BDRVNVMeState *s = bs->opaque;
     NVMeQueuePair *q = s->queues[INDEX_ADMIN];
-    AioContext *aio_context = bdrv_get_aio_context(bs);
     NVMeRequest *req;
     int ret = -EINPROGRESS;
     req = nvme_get_free_req_nowait(q);
@@ -521,7 +520,7 @@ static int nvme_admin_cmd_sync(BlockDriverState *bs, NvmeCmd *cmd)
     }
     nvme_submit_command(q, req, cmd, nvme_admin_cmd_sync_cb, &ret);
 
-    AIO_WAIT_WHILE(aio_context, ret == -EINPROGRESS);
+    AIO_WAIT_WHILE_UNLOCKED(NULL, ret == -EINPROGRESS);
     return ret;
 }
 
-- 
2.39.2
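
For anyone reading along who is less familiar with this pattern, the change
boils down to the sketch below. Everything except AIO_WAIT_WHILE_UNLOCKED(),
aio_bh_schedule_oneshot() and qemu_get_aio_context() is made up for
illustration; the real code is nvme_admin_cmd_sync() in the diff above.

/*
 * Sketch of the synchronous "submit, then poll until the completion
 * callback fires" pattern (illustrative only, not the real driver code).
 * The command submission is faked with a bottom half on the main
 * AioContext.
 */
#include "qemu/osdep.h"
#include "block/aio.h"
#include "block/aio-wait.h"

static void fake_completion_cb(void *opaque)
{
    int *ret = opaque;

    *ret = 0; /* the "command" completed successfully */
}

static int fake_cmd_sync(void)
{
    int ret = -EINPROGRESS;

    /* Stand-in for nvme_submit_command(): completion arrives later. */
    aio_bh_schedule_oneshot(qemu_get_aio_context(), fake_completion_cb, &ret);

    /*
     * Poll until the callback overwrites the sentinel. The NULL argument
     * documents that no AioContext lock is (or needs to be) held; this
     * must run in the main loop thread.
     */
    AIO_WAIT_WHILE_UNLOCKED(NULL, ret == -EINPROGRESS);

    return ret;
}

Compared to AIO_WAIT_WHILE(ctx, ...), the _UNLOCKED variant expects the
caller not to hold the AioContext lock, which is exactly the situation the
commit message describes.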




* Re: [PATCH] block/nvme: use AIO_WAIT_WHILE_UNLOCKED()
  2023-04-04 11:20 [PATCH] block/nvme: use AIO_WAIT_WHILE_UNLOCKED() Stefan Hajnoczi
@ 2023-04-04 11:46 ` Philippe Mathieu-Daudé
  2023-04-04 13:48 ` Paolo Bonzini
  2023-04-27 12:49 ` Kevin Wolf
  2 siblings, 0 replies; 4+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-04-04 11:46 UTC (permalink / raw)
  To: Stefan Hajnoczi, qemu-devel
  Cc: Fam Zheng, Kevin Wolf, qemu-block, Hanna Reitz

On 4/4/23 13:20, Stefan Hajnoczi wrote:
> A few Admin Queue commands are submitted during nvme_file_open(). They
> are synchronous since device initialization cannot continue until the
> commands complete.
> 
> AIO_WAIT_WHILE() is currently used, but the block/nvme.c code actually
> doesn't rely on the AioContext lock. Replace it with
> AIO_WAIT_WHILE_UNLOCKED(NULL, condition). There is no change in behavior
> and the dependency on the AioContext lock is eliminated.
> 
> This is a step towards removing the AioContext lock.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   block/nvme.c | 3 +--
>   1 file changed, 1 insertion(+), 2 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>




* Re: [PATCH] block/nvme: use AIO_WAIT_WHILE_UNLOCKED()
  2023-04-04 11:20 [PATCH] block/nvme: use AIO_WAIT_WHILE_UNLOCKED() Stefan Hajnoczi
  2023-04-04 11:46 ` Philippe Mathieu-Daudé
@ 2023-04-04 13:48 ` Paolo Bonzini
  2023-04-27 12:49 ` Kevin Wolf
  2 siblings, 0 replies; 4+ messages in thread
From: Paolo Bonzini @ 2023-04-04 13:48 UTC (permalink / raw)
  To: Stefan Hajnoczi, qemu-devel
  Cc: Fam Zheng, Philippe Mathieu-Daudé, Kevin Wolf, qemu-block,
	Hanna Reitz

On 4/4/23 13:20, Stefan Hajnoczi wrote:
> A few Admin Queue commands are submitted during nvme_file_open(). They
> are synchronous since device initialization cannot continue until the
> commands complete.
> 
> AIO_WAIT_WHILE() is currently used, but the block/nvme.c code actually
> doesn't rely on the AioContext lock. Replace it with
> AIO_WAIT_WHILE_UNLOCKED(NULL, condition). There is no change in behavior
> and the dependency on the AioContext lock is eliminated.
> 
> This is a step towards removing the AioContext lock.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>   block/nvme.c | 3 +--
>   1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/block/nvme.c b/block/nvme.c
> index 5b744c2bda..829b9c04db 100644
> --- a/block/nvme.c
> +++ b/block/nvme.c
> @@ -512,7 +512,6 @@ static int nvme_admin_cmd_sync(BlockDriverState *bs, NvmeCmd *cmd)
>   {
>       BDRVNVMeState *s = bs->opaque;
>       NVMeQueuePair *q = s->queues[INDEX_ADMIN];
> -    AioContext *aio_context = bdrv_get_aio_context(bs);
>       NVMeRequest *req;
>       int ret = -EINPROGRESS;
>       req = nvme_get_free_req_nowait(q);
> @@ -521,7 +520,7 @@ static int nvme_admin_cmd_sync(BlockDriverState *bs, NvmeCmd *cmd)
>       }
>       nvme_submit_command(q, req, cmd, nvme_admin_cmd_sync_cb, &ret);
>   
> -    AIO_WAIT_WHILE(aio_context, ret == -EINPROGRESS);
> +    AIO_WAIT_WHILE_UNLOCKED(NULL, ret == -EINPROGRESS);
>       return ret;
>   }
> 


Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>




* Re: [PATCH] block/nvme: use AIO_WAIT_WHILE_UNLOCKED()
  2023-04-04 11:20 [PATCH] block/nvme: use AIO_WAIT_WHILE_UNLOCKED() Stefan Hajnoczi
  2023-04-04 11:46 ` Philippe Mathieu-Daudé
  2023-04-04 13:48 ` Paolo Bonzini
@ 2023-04-27 12:49 ` Kevin Wolf
  2 siblings, 0 replies; 4+ messages in thread
From: Kevin Wolf @ 2023-04-27 12:49 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: qemu-devel, Fam Zheng, Philippe Mathieu-Daudé, qemu-block,
	Hanna Reitz

Am 04.04.2023 um 13:20 hat Stefan Hajnoczi geschrieben:
> A few Admin Queue commands are submitted during nvme_file_open(). They
> are synchronous since device initialization cannot continue until the
> commands complete.
> 
> AIO_WAIT_WHILE() is currently used, but the block/nvme.c code actually
> doesn't rely on the AioContext lock. Replace it with
> AIO_WAIT_WHILE_UNLOCKED(NULL, condition). There is no change in behavior
> and the dependency on the AioContext lock is eliminated.
> 
> This is a step towards removing the AioContext lock.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  block/nvme.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/block/nvme.c b/block/nvme.c
> index 5b744c2bda..829b9c04db 100644
> --- a/block/nvme.c
> +++ b/block/nvme.c
> @@ -512,7 +512,6 @@ static int nvme_admin_cmd_sync(BlockDriverState *bs, NvmeCmd *cmd)
>  {
>      BDRVNVMeState *s = bs->opaque;
>      NVMeQueuePair *q = s->queues[INDEX_ADMIN];
> -    AioContext *aio_context = bdrv_get_aio_context(bs);
>      NVMeRequest *req;
>      int ret = -EINPROGRESS;
>      req = nvme_get_free_req_nowait(q);
> @@ -521,7 +520,7 @@ static int nvme_admin_cmd_sync(BlockDriverState *bs, NvmeCmd *cmd)
>      }
>      nvme_submit_command(q, req, cmd, nvme_admin_cmd_sync_cb, &ret);
>  
> -    AIO_WAIT_WHILE(aio_context, ret == -EINPROGRESS);
> +    AIO_WAIT_WHILE_UNLOCKED(NULL, ret == -EINPROGRESS);
>      return ret;
>  }

Wait, do we hold the lock in this piece of code or don't we? Either the
old code was buggy (then the commit message should be explicit about
it), or the new one is.

It seems that in practice, it doesn't matter much because this function
is only called through .bdrv_file_open, which I think always runs in the
main thread for a protocol driver. I believe that we generally don't
hold the AioContext lock there, so it's the old code that was wrong?
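
Roughly, the situation I mean is the sketch below (illustrative only, not
the actual code; the assert is just one way to spell out the expectation,
not something this patch adds):

/*
 * Call path under discussion (sketch):
 *
 *   main loop thread
 *     -> bdrv_open() -> .bdrv_file_open = nvme_file_open()
 *          (AioContext lock generally not held here, as far as I can tell)
 *       -> nvme_admin_cmd_sync()
 *            -> AIO_WAIT_WHILE_UNLOCKED(NULL, ret == -EINPROGRESS)
 *
 * AIO_WAIT_WHILE(ctx, ...) expects the caller to hold ctx's AioContext
 * lock so that it can be dropped while polling; AIO_WAIT_WHILE_UNLOCKED()
 * expects it not to be held.
 */
#include "qemu/osdep.h"
#include "qemu/main-loop.h"

static void nvme_admin_cmd_sync_check(void)
{
    /* Illustrative helper (not in the tree): this path runs in the main thread. */
    assert(qemu_in_main_thread());
}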

Kevin




