From: Stefan Hajnoczi <stefanha@redhat.com>
To: lma <lma@suse.de>
Cc: qemu-devel@nongnu.org, pbonzini@redhat.com, qemu-block@nongnu.org
Subject: Re: A question about how to calculate the "Maximum transfer length" in case of its absence in the Block Limits VPD device response from the hardware
Date: Fri, 18 Apr 2025 11:34:56 -0400
Message-ID: <20250418153456.GA128796@fedora>
In-Reply-To: <20db3af2ece22f598b54a47ec350b466@suse.de>
On Thu, Apr 17, 2025 at 07:27:26PM +0800, lma wrote:
> Hi all,
>
> In the case of SCSI passthrough, if the Block Limits VPD page is absent
> from the hardware's response, QEMU emulates one itself.
>
> There are several variables involved in this process:
> * bl.max_transfer
> * bl.max_iov, which is associated with IOV_MAX.
> * bl.max_hw_iov, which is associated with the max_segments sysfs setting
> of the relevant block device on the host.
> * bl.max_hw_transfer, which is associated with the BLKSECTGET ioctl, in
> other words with the current max_sectors_kb sysfs setting of the relevant
> block device on the host.
>
> QEMU then takes the smallest of these values and, after converting it to
> blocks, returns it as the "Maximum transfer length". See:
> static uint64_t calculate_max_transfer(SCSIDevice *s)
> {
>     /* Host limit in bytes (BLKSECTGET, i.e. max_sectors_kb) */
>     uint64_t max_transfer = blk_get_max_hw_transfer(s->conf.blk);
>     /* Host segment limit (max_segments), capped by IOV_MAX */
>     uint32_t max_iov = blk_get_max_hw_iov(s->conf.blk);
>
>     assert(max_transfer);
>     /* Assume the worst case of one host page per iovec */
>     max_transfer = MIN_NON_ZERO(max_transfer,
>                                 max_iov * qemu_real_host_page_size());
>
>     return max_transfer / s->blocksize;
> }
>
>
> However, due to the IOV_MAX limitation, no matter how powerful the host
> SCSI hardware is, the "Maximum transfer length" that QEMU emulates in the
> Block Limits VPD page is capped at 8192 sectors with a 4 KiB page size and
> a 512-byte logical block size (1024 iovecs * 4 KiB = 4 MiB = 8192 sectors).
> For example:
> host:~ # sg_vpd -p bl /dev/sda
> Block limits VPD page (SBC)
> ......
> Maximum transfer length: 0 blocks [not reported]
> ......
>
>
> host:~ # cat /sys/class/block/sda/queue/max_sectors_kb
> 16384
>
> host:~ # cat /sys/class/block/sda/queue/max_hw_sectors_kb
> 32767
>
> host:~ # cat /sys/class/block/sda/queue/max_segments
> 4096
>
>
> Expected:
> guest:~ # sg_vpd -p bl /dev/sda
> Block limits VPD page (SBC)
> ......
> Maximum transfer length: 0x8000
> ......
>
> guest:~ # cat /sys/class/block/sda/queue/max_sectors_kb
> 16384
>
> guest:~ # cat /sys/class/block/sda/queue/max_hw_sectors_kb
> 32767
>
>
> Actual:
> guest:~ # sg_vpd -p bl /dev/sda
> Block limits VPD page (SBC)
> ......
> Maximum transfer length: 0x2000
> ......
>
> guest:~ # cat /sys/class/block/sda/queue/max_sectors_kb
> 4096
>
> guest:~ # cat /sys/class/block/sda/queue/max_hw_sectors_kb
> 32767
>
>
> It seems the current design cannot fully utilize the performance of the
> SCSI hardware. I have two questions:
> 1. Would it be reasonable to drop the IOV_MAX limitation and directly use
> the value returned by BLKSECTGET as the maximum transfer length when QEMU
> emulates the Block Limits VPD page?
> If we did so, the maximum transfer length in the guest would be consistent
> with the capabilities of the host hardware.
>
> 2. Also, assume I set max_sectors_kb in the guest to a value (e.g. 8192 KB)
> that does not exceed the capabilities of the host hardware (e.g. 16384 KB)
> but does exceed the limit caused by IOV_MAX (e.g. 4096 KB).
> Are there any risks in readv()/writev() in raw-posix?
Not a definitive answer, but just something to encourage discussion:
In theory IOV_MAX should not be factored into the Block Limits VPD page
Maximum Transfer Length field because there is already an HBA limit on
the maximum number of segments. For example, virtio-scsi has a seg_max
Configuration Space field that guest drivers honor independently of
Maximum Transfer Length.
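For reference, a trimmed excerpt of the guest-visible virtio-scsi
configuration space (field names as in the Linux UAPI header
include/uapi/linux/virtio_scsi.h; the __virtio32 type comes from
<linux/virtio_types.h>, and fields not relevant here are omitted):

struct virtio_scsi_config {
    __virtio32 num_queues;
    __virtio32 seg_max;      /* maximum number of segments in one request */
    __virtio32 max_sectors;  /* hint: maximum transfer size in 512-byte sectors */
    __virtio32 cmd_per_lun;
    /* ... remaining fields omitted ... */
};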
However, I can imagine why IOV_MAX needs to be factored in:
1. The maximum number of segments might be hardcoded in guest drivers
for some SCSI HBAs, and QEMU has no way of exposing IOV_MAX to the
guest in that case.
2. Guest physical RAM addresses translate to host virtual memory. That
means 1 segment as seen by the guest might actually require multiple
physical DMA segments on the host. A conservative calculation that
assumes the worst-case 1 iovec per 4 KB memory page prevents the
host maximum segments limit (note this is not the Maximum Transfer
Length limit!) from being exceeded; a worked example of this
conservative calculation follows below.
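To make the worst-case arithmetic concrete, here is a small standalone
sketch. It is not QEMU code: MIN_NON_ZERO is redefined locally, IOV_MAX
is assumed to be the usual Linux value of 1024, and the host limits are
the ones quoted earlier in this thread.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Local stand-in for QEMU's MIN_NON_ZERO: smallest non-zero argument */
#define MIN_NON_ZERO(a, b) \
    ((a) == 0 ? (b) : ((b) == 0 ? (a) : ((a) < (b) ? (a) : (b))))

int main(void)
{
    uint64_t max_hw_transfer = 16384ULL * 1024; /* BLKSECTGET: max_sectors_kb = 16384 */
    uint64_t max_iov         = 1024;            /* effective limit: IOV_MAX */
    uint64_t page_size       = 4096;            /* worst case: 1 iovec per 4 KiB page */
    uint64_t blocksize       = 512;             /* logical block size */

    uint64_t max_transfer = MIN_NON_ZERO(max_hw_transfer,
                                         max_iov * page_size);

    /* Prints 8192: the 4 MiB iovec-derived cap wins over the 16 MiB host limit */
    printf("Maximum transfer length: %" PRIu64 " blocks\n",
           max_transfer / blocksize);
    return 0;
}

Compiled and run, this prints 8192 blocks (0x2000), which matches the
capped value observed in the guest above.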
So there seem to be at least two problems here. If you relax the
calculation, there will be corner cases that break because the guest can
send too many segments.
Stefan