From: Hannes Reinecke <hare@suse.de>
To: Mike Christie <michael.christie@oracle.com>,
Stefan Hajnoczi <stefanha@redhat.com>
Cc: martin.petersen@oracle.com, linux-scsi@vger.kernel.org,
james.bottomley@hansenpartnership.com,
virtualization@lists.linux.dev, mst@redhat.com,
pbonzini@redhat.com, eperezma@redhat.com
Subject: Re: [PATCH 0/4] scsi: Support devices that don't have a cmd_per_lun limit
Date: Thu, 23 Apr 2026 11:45:27 +0200
Message-ID: <9ce439b8-4e56-4a8c-8ef9-d8d9e93ab77a@suse.de>
In-Reply-To: <603eee86-9914-4ac8-b937-a38922e69a45@oracle.com>

On 4/22/26 20:05, Mike Christie wrote:
> On 4/20/26 12:33 PM, Stefan Hajnoczi wrote:
>> On Fri, Apr 17, 2026 at 05:57:20PM -0500, Mike Christie wrote:
>>> The following patches were made over Linus's and Martin's 7.1 trees.
>>> They fix an issue where for virtio-scsi we export a lot of non-SCSI
>>> devices but are getting throttled by the cmd_per_lun limit too early.
>>> For example, we export one or more NVMe or block devices and would
>>> like to just pass commands to them in a way where virtio-scsi's hw
>>> queue limits match the physical hardware. Or in some cases we are
>>> doing cgroup-based throttling on the host side, and we don't want the
>>> guest to block IO when the host knows we have extra bandwidth.
>>>
>>> The patches add a new cmd_per_lun value so drivers can indicate
>>> when to avoid tracking queueing at the device-wide level. They
>>> then rely on just the block layer hw queue limits, and they
>>> convert virtio-scsi. They also fix some can_queue related issues
>>> discovered while testing/reviewing.
>>
>> Hi Mike,
>> Is there a difference between setting cmd_per_lun to U32_MAX with your
>> patches versus setting cmd_per_lun to the virtqueue size without your
>> patches (this can already be done today without code changes in the
>> driver)?
>
> The problem today is that cmd_per_lun doesn't take into account the
> multiqueue queues (virtqueues in virtio), so we have a low limit of 1024
> commands total. On a 32-128 vCPU VM we can easily hit that, as there are
> lots of IO submission threads spread over lots of those CPUs. CPUs are
> then mapped to block mq queues, which are mapped to virtqueues, so we are
> hitting them hard.
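As a back-of-the-envelope sketch of that gap (the queue count below is an illustrative assumption, not a measurement from Mike's setup):

```python
# Illustrative model of the throttling described above: one device-wide
# cmd_per_lun cap vs. the aggregate capacity of the per-CPU hw queues.
virtqueue_size = 1024    # QEMU caps each virtqueue at 1024 entries
num_virtqueues = 32      # assume one virtqueue per vCPU on a 32-vCPU VM

device_wide_cap = virtqueue_size          # 1024 commands for the whole LUN
aggregate_capacity = num_virtqueues * virtqueue_size

print(device_wide_cap)      # 1024
print(aggregate_capacity)   # 32768 slots the hw queues could absorb
```

So any single LUN is throttled at 1024 in-flight commands even though the virtqueues collectively have far more room, which is the early throttling the patches address.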
>
> That 1024 value comes from QEMU which limits virtqueue_size to 1024.
> We could increase that to 4096 or 32K or whatever. The problem is that
> we would then be wasting a lot of memory, as we would be allocating lots
> of really large virtqueues that would go underutilized (we are submitting
> tens of thousands of total IOs, but not all to a single queue).
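A rough sketch of that memory cost, assuming the virtio split-ring layout (16 bytes per descriptor, 2 per avail-ring entry, 8 per used-ring entry, so about 26 bytes per slot before any driver-side state) and an illustrative queue count:

```python
# Cost of enlarging every virtqueue vs. keeping them at 1024 entries.
BYTES_PER_SLOT = 16 + 2 + 8   # split-ring: descriptor + avail + used entries
num_virtqueues = 128          # assume one virtqueue per vCPU on a large guest

def total_vring_kib(queue_size):
    # Ring memory across all virtqueues, ignoring headers and alignment.
    return num_virtqueues * queue_size * BYTES_PER_SLOT // 1024

print(total_vring_kib(1024))       # 3328 KiB across all queues today
print(total_vring_kib(32 * 1024))  # 106496 KiB if raised to 32K, mostly idle
```

The per-queue cost scales linearly with virtqueue_size, so a blanket 32x increase buys 32x the ring memory even though only a fraction of the slots would ever be occupied.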
>
> So a possibly good balance, avoiding both a magic number (U32_MAX)
> and having to update the spec, would be to:
>
> 1. Fix up scsi-ml and virtio-scsi so they allow cmd_per_lun to be
> greater than can_queue (virtqueue_size for virtio-scsi).
>
> 2. Increase the scsi-ml cmd_per_lun cap from 4096 to S16_MAX
> (scsi-ml uses a short for cmd_per_lun).
>
> The only drawback to this would be that for each scsi_device we track
> running IO with an sbitmap. For my cases we don't need it, so it would
> be a waste of memory. For S16_MAX worth of commands I think it would
> be 128K wasted, so not too bad for us as we don't have lots of these
> types of high-perf devices per VM.
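For scale, the raw bit cost of such a bitmap works out as follows; the kernel's sbitmap adds per-word and per-CPU bookkeeping on top of these bare bits, which is how the total grows toward figures like the 128K estimated above:

```python
# One bit per possible in-flight command, rounded up to whole bytes.
S16_MAX = 32767
raw_bitmap_bytes = (S16_MAX + 7) // 8
print(raw_bitmap_bytes)   # 4096 bytes of bare bits per scsi_device
```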
>
Ideally I would kill cmd_per_lun entirely.
It really is a poor man's fairness algorithm (its sole purpose is to
avoid starvation with many LUNs), and we really should look at whether
we can replace it with tagsets.
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
Thread overview: 23+ messages
2026-04-17 22:57 [PATCH 0/4] scsi: Support devices that don't have a cmd_per_lun limit Mike Christie
2026-04-17 22:57 ` [PATCH 1/4] scsi: Fix can_queue comments Mike Christie
2026-04-20 8:28 ` John Garry
2026-04-17 22:57 ` [PATCH 2/4] scsi: qedi: Fix command overqueueing Mike Christie
2026-04-20 16:45 ` Bart Van Assche
2026-04-20 17:47 ` Mike Christie
2026-04-20 18:02 ` Bart Van Assche
2026-04-20 18:48 ` Mike Christie
2026-04-17 22:57 ` [PATCH 3/4] scsi: Support scsi_devices without a device wide limit Mike Christie
2026-04-20 16:51 ` Bart Van Assche
2026-04-22 13:15 ` Hannes Reinecke
2026-04-22 18:06 ` Mike Christie
2026-04-23 10:02 ` John Garry
2026-04-23 10:32 ` Hannes Reinecke
2026-04-27 1:33 ` Martin K. Petersen
2026-04-17 22:57 ` [PATCH 4/4] virtio-scsi: " Mike Christie
2026-04-20 17:30 ` Stefan Hajnoczi
2026-04-20 17:37 ` Bart Van Assche
2026-04-20 17:33 ` [PATCH 0/4] scsi: Support devices that don't have a cmd_per_lun limit Stefan Hajnoczi
2026-04-22 18:05 ` Mike Christie
2026-04-23 9:45 ` Hannes Reinecke [this message]
2026-04-23 16:40 ` Bart Van Assche
2026-04-24 5:45 ` Hannes Reinecke