From: Mike Christie <michael.christie@oracle.com>
To: Stefano Garzarella <sgarzare@redhat.com>
Cc: stefanha@redhat.com, linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org, mst@redhat.com,
	jasowang@redhat.com, pbonzini@redhat.com,
	virtualization@lists.linux-foundation.org
Subject: Re: [RFC PATCH 0/8] vhost: allow userspace to control vq cpu affinity
Date: Fri, 4 Dec 2020 11:10:51 -0600	[thread overview]
Message-ID: <40b22c4a-f9db-1389-aed1-b3d33678cfda@oracle.com> (raw)
In-Reply-To: <20201204160651.7wlselx4jm6k66mb@steredhat>

On 12/4/20 10:06 AM, Stefano Garzarella wrote:
> Hi Mike,
> 
> On Fri, Dec 04, 2020 at 01:56:25AM -0600, Mike Christie wrote:
>> The following patches, made over mst's vhost branch, allow userspace
>> to set each vq's CPU affinity. Currently, with cgroups the worker
>> thread inherits the affinity settings, but we are at the mercy of the
>> CPU scheduler for which CPU each vq's IO is executed on. This can
>> result in the scheduler sometimes hammering a couple of queues on the
>> host instead of spreading the work out the way the guest's application
>> may have intended if it is multiqueue (mq) aware.
>>
>> This version of the patches does not use the interface you were
>> initially discussing, which was similar to nbd's old (3.x kernel days)
>> NBD_DO_IT ioctl, where userspace calls down to the kernel and we run
>> from that context. Instead, these patches just allow userspace to tell
>> the kernel which CPU a vq should run on, and we then use the kernel's
>> workqueue code to handle the thread management.
> 
> I agree that reusing the kernel's workqueue code would be a good strategy.
> 
> One concern is how easy it is to implement an adaptive polling strategy
> using workqueues. From what I've seen, adding some polling of both the
> backend and the virtqueue helps to eliminate interrupts and reduce
> latency.
> 
Would the polling you need be similar to the vhost-net poll code in
vhost_net_busy_poll (with a different algorithm, though)? But we want to
be able to poll multiple devs/vqs from the same CPU, right? Something
like:

retry:
	for each poller on CPU N
		if (poller has work)
			driver->run_work(poller)

	if (poll limit hit)
		return

	cpu_relax();
	goto retry;

?

If so, I had an idea for it. Let me send an additional patch on top of 
this set.
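
To give a rough idea of the shape of it, here is a completely untested
sketch. vhost_poller, has_work, run_work, and the per-CPU poller list
are all made-up names just for illustration; they are not from this
patchset:

#include <linux/list.h>
#include <linux/percpu.h>
#include <linux/processor.h>

struct vhost_poller {
	struct list_head node;		/* entry on this CPU's poller list */
	/* does this vq/backend have pending work? */
	bool (*has_work)(struct vhost_poller *p);
	/* driver's work fn */
	void (*run_work)(struct vhost_poller *p);
};

/*
 * One poller list per CPU; drivers would register their vqs on it.
 * List init and locking/RCU are omitted for brevity.
 */
static DEFINE_PER_CPU(struct list_head, vhost_pollers);

/* Poll every registered poller on this CPU until the budget runs out. */
static void vhost_poll_cpu(unsigned int poll_limit)
{
	struct list_head *pollers = this_cpu_ptr(&vhost_pollers);
	struct vhost_poller *p;
	unsigned int i;

	for (i = 0; i < poll_limit; i++) {
		bool did_work = false;

		list_for_each_entry(p, pollers, node) {
			if (p->has_work(p)) {
				p->run_work(p);
				did_work = true;
			}
		}

		/* Nothing pending this round; spin politely. */
		if (!did_work)
			cpu_relax();
	}
}

When the poll budget runs out, the work item running this loop could
requeue itself with queue_work_on() so it stays on the CPU userspace
picked, or fall back to sleeping on interrupts/kicks when things go
idle.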

Thread overview: 18+ messages
2020-12-04  7:56 [RFC PATCH 0/8] vhost: allow userspace to control vq cpu affinity Mike Christie
2020-12-04  7:56 ` [RFC PATCH 1/8] vhost: remove work arg from vhost_work_flush Mike Christie
2020-12-04  7:56 ` [RFC PATCH 2/8] vhost-scsi: remove extra flushes Mike Christie
2020-12-04  7:56 ` [RFC PATCH 3/8] vhost poll: fix coding style Mike Christie
2020-12-04  7:56 ` [RFC PATCH 4/8] vhost: move msg_handler to new ops struct Mike Christie
2020-12-04  7:56 ` [RFC PATCH 5/8] vhost: allow userspace to bind vqs to CPUs Mike Christie
2020-12-04  8:09   ` Jason Wang
2020-12-04 16:32     ` Mike Christie
2020-12-07  4:27       ` Jason Wang
2020-12-07 18:31         ` Mike Christie
2020-12-08  2:30           ` Jason Wang
2020-12-04  7:56 ` [RFC PATCH 6/8] vhost-scsi: make SCSI cmd completion per vq Mike Christie
2020-12-04  7:56 ` [RFC PATCH 7/8] vhost, vhost-scsi: flush IO vqs then send TMF rsp Mike Christie
2020-12-04  7:56 ` [RFC PATCH 8/8] vhost-scsi: hook vhost-scsi into vring set cpu support Mike Christie
2020-12-04 16:06 ` [RFC PATCH 0/8] vhost: allow userspace to control vq cpu affinity Stefano Garzarella
2020-12-04 17:10   ` Mike Christie [this message]
2020-12-04 17:33     ` Mike Christie
2020-12-09 15:58       ` Stefano Garzarella
