virtualization.lists.linux-foundation.org archive mirror
* Re: [PATCH 1/2] virtio-blk: set req->state to MQ_RQ_COMPLETE after polling I/O is finished
       [not found] <20221206141125.93055-1-suwan.kim027@gmail.com>
@ 2022-12-07 21:47 ` Stefan Hajnoczi
  2022-12-08 16:48   ` Jens Axboe
       [not found] ` <20221206141125.93055-2-suwan.kim027@gmail.com>
  1 sibling, 1 reply; 4+ messages in thread
From: Stefan Hajnoczi @ 2022-12-07 21:47 UTC (permalink / raw)
  To: axboe; +Cc: linux-block, mst, virtualization, hch, pbonzini, Suwan Kim


On Tue, Dec 06, 2022 at 11:11:24PM +0900, Suwan Kim wrote:
> The driver should set req->state to MQ_RQ_COMPLETE after it finishes
> processing a request. But virtio-blk doesn't set MQ_RQ_COMPLETE after
> virtblk_poll() handles a request, so req->state remains MQ_RQ_IN_FLIGHT.
> Fortunately this hasn't caused any issue so far because
> blk_mq_end_request_batch() sets req->state to MQ_RQ_IDLE. This patch
> properly sets req->state after polling I/O is finished.
> 
> Fixes: 4e0400525691 ("virtio-blk: support polling I/O")
> Signed-off-by: Suwan Kim <suwan.kim027@gmail.com>
> ---
>  drivers/block/virtio_blk.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 19da5defd734..cf64d256787e 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -839,6 +839,7 @@ static void virtblk_complete_batch(struct io_comp_batch *iob)
>  	rq_list_for_each(&iob->req_list, req) {
>  		virtblk_unmap_data(req, blk_mq_rq_to_pdu(req));
>  		virtblk_cleanup_cmd(req);
> +		blk_mq_set_request_complete(req);
>  	}
>  	blk_mq_end_request_batch(iob);
>  }

The doc comment for blk_mq_set_request_complete() mentions this being
used in ->queue_rq(), but that's not the case here. Does the doc comment
need to be updated if we're using the function in a different way?

I'm not familiar enough with the Linux block APIs, but this feels weird
to me. Shouldn't blk_mq_end_request_batch(iob) take care of this for us?
Why does it set the state to IDLE instead of COMPLETE?

I think Jens can confirm whether we really want all drivers that use
polling and io_comp_batch to manually call
blk_mq_set_request_complete().
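
For reference, the request states I'm thinking of are these (values from
include/linux/blk-mq.h; the annotations and transition notes are just my
reading, so please correct me if I'm off):

        enum mq_rq_state {
                MQ_RQ_IDLE      = 0,    /* not owned by a driver */
                MQ_RQ_IN_FLIGHT = 1,    /* issued to the driver */
                MQ_RQ_COMPLETE  = 2,    /* completion seen, not yet freed */
        };

        /*
         * Non-batched path: blk_mq_complete_request() moves a request from
         * IN_FLIGHT to COMPLETE, and blk_mq_end_request() later moves it
         * from COMPLETE to IDLE when the request is freed. As the commit
         * message says, blk_mq_end_request_batch() goes straight from
         * IN_FLIGHT to IDLE, which is what prompted this patch.
         */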

Thanks,
Stefan


* Re: [PATCH 2/2] virtio-blk: support completion batching for the IRQ path
       [not found] ` <20221206141125.93055-2-suwan.kim027@gmail.com>
@ 2022-12-07 22:05   ` Stefan Hajnoczi
  0 siblings, 0 replies; 4+ messages in thread
From: Stefan Hajnoczi @ 2022-12-07 22:05 UTC (permalink / raw)
  To: Suwan Kim; +Cc: axboe, linux-block, mst, virtualization, hch, pbonzini


On Tue, Dec 06, 2022 at 11:11:25PM +0900, Suwan Kim wrote:
> This patch adds completion batching to the IRQ path. It reuses the batch
> completion code of virtblk_poll(): requests are collected in an
> io_comp_batch and completed all at once. This improves performance by
> about 2%.
>
> To validate the performance improvement and stability, I ran fio tests on
> a 4 vCPU VM and a 12 vCPU VM. Both VMs have 8 GB of RAM and as many HW
> queues as vCPUs.
> The fio command uses the parameters below; I ran fio 5 times and averaged
> the IOPS.
> (io_uring, randread, direct=1, bs=512, iodepth=64, numjobs=2,4)
>
> The test results show about a 2% improvement.
> 
>            4 vcpu VM       |   numjobs=2   |   numjobs=4
>       -----------------------------------------------------------
>         fio without patch  |  367.2K IOPS  |   397.6K IOPS
>       -----------------------------------------------------------
>         fio with patch     |  372.8K IOPS  |   407.7K IOPS
> 
>            12 vcpu VM      |   numjobs=2   |   numjobs=4
>       -----------------------------------------------------------
>         fio without patch  |  363.6K IOPS  |   374.8K IOPS
>       -----------------------------------------------------------
>         fio with patch     |  373.8K IOPS  |   385.3K IOPS
> 
> Signed-off-by: Suwan Kim <suwan.kim027@gmail.com>
> ---
>  drivers/block/virtio_blk.c | 38 +++++++++++++++++++++++---------------
>  1 file changed, 23 insertions(+), 15 deletions(-)

Cool, thanks for doing this!

> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index cf64d256787e..48fcf745f007 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -272,6 +272,18 @@ static inline void virtblk_request_done(struct request *req)
>  	blk_mq_end_request(req, virtblk_result(vbr));
>  }
>  
> +static void virtblk_complete_batch(struct io_comp_batch *iob)
> +{
> +	struct request *req;
> +
> +	rq_list_for_each(&iob->req_list, req) {
> +		virtblk_unmap_data(req, blk_mq_rq_to_pdu(req));
> +		virtblk_cleanup_cmd(req);
> +		blk_mq_set_request_complete(req);
> +	}
> +	blk_mq_end_request_batch(iob);
> +}
> +
>  static void virtblk_done(struct virtqueue *vq)
>  {
>  	struct virtio_blk *vblk = vq->vdev->priv;
> @@ -280,6 +292,7 @@ static void virtblk_done(struct virtqueue *vq)
>  	struct virtblk_req *vbr;
>  	unsigned long flags;
>  	unsigned int len;
> +	DEFINE_IO_COMP_BATCH(iob);
>  
>  	spin_lock_irqsave(&vblk->vqs[qid].lock, flags);
>  	do {
> @@ -287,7 +300,9 @@ static void virtblk_done(struct virtqueue *vq)
>  		while ((vbr = virtqueue_get_buf(vblk->vqs[qid].vq, &len)) != NULL) {
>  			struct request *req = blk_mq_rq_from_pdu(vbr);
>  
> -			if (likely(!blk_should_fake_timeout(req->q)))
> +			if (likely(!blk_should_fake_timeout(req->q)) &&
> +				!blk_mq_add_to_batch(req, &iob, vbr->status,
> +							virtblk_complete_batch))
>  				blk_mq_complete_request(req);
>  			req_done = true;
>  		}
> @@ -295,9 +310,14 @@ static void virtblk_done(struct virtqueue *vq)
>  			break;
>  	} while (!virtqueue_enable_cb(vq));
>  
> -	/* In case queue is stopped waiting for more buffers. */
> -	if (req_done)
> +	if (req_done) {
> +		if (!rq_list_empty(iob.req_list))
> +			virtblk_complete_batch(&iob);

Calling virtblk_complete_batch(&iob) directly here is a nice little
optimization that avoids the indirect call through iob.complete(&iob) :).
Not sure if hard-coding the callback is good style, but it works in this
case because we know it can only be virtblk_complete_batch().
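
To spell out what I mean (purely illustrative, not a requested change),
the generic form would go through the function pointer installed by
blk_mq_add_to_batch():

        if (!rq_list_empty(iob.req_list))
                iob.complete(&iob);     /* indirect call */

whereas the direct call in the patch skips that pointer dereference.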

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>


* Re: [PATCH 1/2] virtio-blk: set req->state to MQ_RQ_COMPLETE after polling I/O is finished
  2022-12-07 21:47 ` [PATCH 1/2] virtio-blk: set req->state to MQ_RQ_COMPLETE after polling I/O is finished Stefan Hajnoczi
@ 2022-12-08 16:48   ` Jens Axboe
  2022-12-12  6:50     ` Christoph Hellwig
  0 siblings, 1 reply; 4+ messages in thread
From: Jens Axboe @ 2022-12-08 16:48 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: linux-block, mst, virtualization, hch, pbonzini, Suwan Kim

On 12/7/22 2:47 PM, Stefan Hajnoczi wrote:
> On Tue, Dec 06, 2022 at 11:11:24PM +0900, Suwan Kim wrote:
>> The driver should set req->state to MQ_RQ_COMPLETE after it finishes
>> processing a request. But virtio-blk doesn't set MQ_RQ_COMPLETE after
>> virtblk_poll() handles a request, so req->state remains MQ_RQ_IN_FLIGHT.
>> Fortunately this hasn't caused any issue so far because
>> blk_mq_end_request_batch() sets req->state to MQ_RQ_IDLE. This patch
>> properly sets req->state after polling I/O is finished.
>>
>> Fixes: 4e0400525691 ("virtio-blk: support polling I/O")
>> Signed-off-by: Suwan Kim <suwan.kim027@gmail.com>
>> ---
>>  drivers/block/virtio_blk.c | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
>> index 19da5defd734..cf64d256787e 100644
>> --- a/drivers/block/virtio_blk.c
>> +++ b/drivers/block/virtio_blk.c
>> @@ -839,6 +839,7 @@ static void virtblk_complete_batch(struct io_comp_batch *iob)
>>  	rq_list_for_each(&iob->req_list, req) {
>>  		virtblk_unmap_data(req, blk_mq_rq_to_pdu(req));
>>  		virtblk_cleanup_cmd(req);
>> +		blk_mq_set_request_complete(req);
>>  	}
>>  	blk_mq_end_request_batch(iob);
>>  }
> 
> The doc comment for blk_mq_set_request_complete() mentions this being
> used in ->queue_rq(), but that's not the case here. Does the doc comment
> need to be updated if we're using the function in a different way?

Looks like it's a bit outdated...

> I'm not familiar enough with the Linux block APIs, but this feels weird
> to me. Shouldn't blk_mq_end_request_batch(iob) take care of this for us?
> Why does it set the state to IDLE instead of COMPLETE?
> 
> I think Jens can confirm whether we really want all drivers that use
> polling and io_comp_batch to manually call
> blk_mq_set_request_complete().

There should be no need to call blk_mq_set_request_complete() directly in
the driver for this.

-- 
Jens Axboe



* Re: [PATCH 1/2] virtio-blk: set req->state to MQ_RQ_COMPLETE after polling I/O is finished
  2022-12-08 16:48   ` Jens Axboe
@ 2022-12-12  6:50     ` Christoph Hellwig
  0 siblings, 0 replies; 4+ messages in thread
From: Christoph Hellwig @ 2022-12-12  6:50 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, mst, virtualization, hch, Stefan Hajnoczi, pbonzini,
	Suwan Kim

On Thu, Dec 08, 2022 at 09:48:23AM -0700, Jens Axboe wrote:
> > The doc comment for blk_mq_set_request_complete() mentions this being
> > used in ->queue_rq(), but that's not the case here. Does the doc comment
> > need to be updated if we're using the function in a different way?
> 
> Looks like it's a bit outdated...

I think the comment is still entirely correct.

> 
> > I'm not familiar enough with the Linux block APIs, but this feels weird
> > to me. Shouldn't blk_mq_end_request_batch(iob) take care of this for us?
> > Why does it set the state to IDLE instead of COMPLETE?
> > 
> > I think Jens can confirm whether we really want all drivers that use
> > polling and io_comp_batch to manually call
> > blk_mq_set_request_complete().
> 
> There should be no need to call blk_mq_set_request_complete() directly in
> the driver for this.

Exactly.  Polling or not, drivers should go through the normal completion
interface, that is blk_mq_complete_request or the lower-level options
blk_mq_complete_request_remote and blk_mq_complete_request_direct.
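
As a rough sketch of the shape I mean (mydrv_* are placeholder names, not
anything in the tree, and the zero error status is just for brevity):

static void mydrv_complete_one(struct request *req,
                               struct io_comp_batch *iob)
{
        /*
         * Preferred: hand the request to the batch; the batch's
         * ->complete() callback ends all collected requests at once
         * through blk_mq_end_request_batch().
         */
        if (blk_mq_add_to_batch(req, iob, 0, mydrv_complete_batch))
                return;

        /*
         * Otherwise use the normal completion interface, which marks the
         * request MQ_RQ_COMPLETE before invoking the driver's ->complete()
         * callback.
         */
        blk_mq_complete_request(req);
}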
