* Re: [PATCH 12/16] vhost: support multiple worker threads
From: Michael S. Tsirkin @ 2020-10-08 20:26 UTC
To: Mike Christie
Cc: martin.petersen, linux-scsi, virtualization, target-devel,
stefanha, pbonzini
On Thu, Oct 08, 2020 at 12:56:53PM -0500, Mike Christie wrote:
> On 10/7/20 3:54 PM, Mike Christie wrote:
> > This is a prep patch to support multiple vhost worker threads per
> > vhost dev. It converts the code that had assumed a single worker
> > thread by:
> >
> > 1. Moving the worker-related fields to a new struct vhost_worker.
> > 2. Converting the vhost.c code to use the new struct and assume we
> > will have an array of workers.
> > 3. Exporting a helper function that will be used in the last patch,
> > when vhost-scsi is converted to use this new functionality.
> >
>
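To make the conversion above concrete, here is a rough sketch of the
split; the field and helper names are illustrative, not the actual
patch contents:

#include <linux/llist.h>
#include <linux/sched.h>

/*
 * Sketch only: worker state the vhost_dev used to hold once moves
 * into its own struct, so a dev can carry an array of these.
 */
struct vhost_worker {
	struct task_struct *task;	/* the worker kthread */
	struct llist_head work_list;	/* pending vhost_work items */
};

/* Exported helper: queue a work item on one specific worker. */
void vhost_work_queue_on(struct vhost_worker *worker,
			 struct vhost_work *work)
{
	if (!test_and_set_bit(VHOST_WORK_QUEUED, &work->flags)) {
		llist_add(&work->node, &worker->work_list);
		wake_up_process(worker->task);
	}
}

The single-worker case then just becomes the nworkers == 1 instance of
the same code path.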
> Oh yeah I also wanted to bring up this patch:
>
> https://www.spinics.net/lists/netdev/msg192548.html
>
> The problem with my multi-threading patches is that I was focused on
> the cgroup support parts, and that led to some gross decisions.
>
> 1. I kept the cgroup support, but as a result I do not have control
> over thread affinity and cannot make sure cmds are executed on an
> optimal CPU the way the above patches do.
>
> When I drop the cgroup support, bind the threads to specific CPUs,
> and make sure IO is run on the CPU it came in on, IOPs for vhost-scsi
> jump from 600K to 800K.
>
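For reference, the binding described above can be done with stock
kthread APIs; a minimal sketch, assuming a per-CPU workers array and a
vhost_worker_fn (both names are mine, not from the patches):

#include <linux/kthread.h>
#include <linux/cpumask.h>

/*
 * Sketch: one worker kthread pinned to each possible CPU. Note that
 * kthread_bind() is exactly what forgoes the cgroup attach, which is
 * the tradeoff being weighed in this thread.
 */
static int vhost_create_percpu_workers(struct vhost_dev *dev)
{
	struct task_struct *task;
	int cpu;

	for_each_possible_cpu(cpu) {
		task = kthread_create(vhost_worker_fn, &dev->workers[cpu],
				      "vhost-%d", cpu);
		if (IS_ERR(task))
			return PTR_ERR(task);
		kthread_bind(task, cpu);	/* pin to this CPU */
		dev->workers[cpu].task = task;
		wake_up_process(task);
	}
	return 0;
}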
> 2. I can possibly create a lot of threads.
>
> So a couple open issues are:
>
> 1. Can we do a thread per CPU that is shared across all vhost
> devices? That would mean dropping the cgroup vhost worker support.
>
> 2. Can we just use the kernel's workqueues then?
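For context on option 2, a sketch of what routing vhost work through a
shared kernel workqueue might look like (vhost_wq is an assumed name,
not an existing symbol):

#include <linux/workqueue.h>
#include <linux/smp.h>

/* One shared workqueue for all vhost devices. */
static struct workqueue_struct *vhost_wq;

static int __init vhost_wq_init(void)
{
	vhost_wq = alloc_workqueue("vhost", WQ_HIGHPRI | WQ_SYSFS, 0);
	return vhost_wq ? 0 : -ENOMEM;
}

static void vhost_queue_work(struct work_struct *work)
{
	/* run the handler on the CPU the submission came in on */
	queue_work_on(raw_smp_processor_id(), vhost_wq, work);
}

queue_work_on() gives the run-on-the-submitting-CPU behavior, but the
workqueue threads would not sit in the vhost owner's cgroup, which is
the accounting concern raised below.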
Problem is, we are talking about *lots* of CPU, IO, etc., and at the
moment cgroups is how people expect to account for that overhead.
* Re: [PATCH 00/16 V2] vhost: fix scsi cmd handling and IOPs
From: Michael S. Tsirkin @ 2020-10-23 15:46 UTC
To: Mike Christie
Cc: martin.petersen, linux-scsi, virtualization, target-devel,
stefanha, pbonzini
On Wed, Oct 07, 2020 at 03:54:45PM -0500, Mike Christie wrote:
> The following patches were made over Michael's vhost branch here:
> https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/log/?h=vhost
>
> The patches also apply to Linus's or Martin's trees if you apply
> https://patchwork.kernel.org/patch/11790681/
> which has already been merged into mst's tree.
>
> The following patches are a follow-up to this post:
> https://patchwork.kernel.org/cover/11790763/
> which originally fixed how vhost-scsi handled cmds so that we would
> not get IO errors when sending more than 256 cmds.
>
> In that patchset I needed to detect if a vq was in use, and for this
> patch:
> https://patchwork.kernel.org/patch/11790685/
> it was suggested to add support for VHOST_RING_ENABLE. While doing
> that, though, I hit a couple of problems:
>
> 1. The patches moved the vhost-scsi cmd allocation from per lio
> session to per vhost vq. To support both VHOST_RING_ENABLE and
> userspace that didn't support it, I would have to keep around the
> old per session/device cmd allocator/completion and also maintain
> the new code. Or, I would still have to use this patch
> patchwork.kernel.org/cover/11790763/ for the compat case, so
> adding the new ioctl would not help much.
>
> 2. For vhost-scsi I also wanted to avoid allocating iovecs for all
> 128 vqs when we normally use only a couple. To do this, I needed
> something similar to #1, but the problem is that the
> VHOST_RING_ENABLE call would come too late.
>
> To try to balance #1 and #2, these patches just allow vhost-scsi to
> set up a vq when userspace starts to configure it. The driver then
> only fully sets up what is actually used (we still waste some memory
> to support older setups, but we do not have to preallocate everything
> like before), and I do not need to maintain two code paths.
>
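As a rough illustration of that setup-on-first-configure idea (the
setup flag and vhost_scsi_setup_vq() are hypothetical names standing
in for the real per-vq allocation):

/*
 * Sketch: do the per-vq allocation the first time userspace
 * configures the ring, instead of preallocating for all 128 vqs
 * at open time.
 */
static int vhost_scsi_setup_vq_once(struct vhost_virtqueue *vq)
{
	int ret;

	if (vq->setup)			/* already configured earlier */
		return 0;

	ret = vhost_scsi_setup_vq(vq);	/* alloc iovecs, cmds, etc. */
	if (ret)
		return ret;

	vq->setup = true;
	return 0;
}

Something like this would be called from the VHOST_SET_VRING_* ioctl
path, so only the vqs userspace actually touches get their resources
allocated.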
> Note that in this posting I am also including the additional patches
> that create multiple vhost worker threads, because I wanted to see
> if people felt that, to support that and to handle this enablement
> issue, we want a completely new ioctl.
>
>
> V2:
> - fix use-before-set cpu var errors
> - drop vhost_vq_is_setup
> - include patches to do a worker thread per scsi IO vq
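The last item could, for example, amount to a mapping like this (a
sketch only; the workers array is from the earlier illustration, not
the patchset):

/*
 * Sketch: give each scsi IO vq its own worker, with the ctl/evt vqs
 * sharing worker 0. VHOST_SCSI_VQ_IO is the index of the first IO vq
 * in vhost-scsi.
 */
static struct vhost_worker *
vhost_scsi_pick_worker(struct vhost_dev *dev, int vq_idx)
{
	if (vq_idx < VHOST_SCSI_VQ_IO)	/* ctl and evt vqs */
		return &dev->workers[0];
	return &dev->workers[(vq_idx - VHOST_SCSI_VQ_IO) % dev->nworkers];
}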
Stefan, Paolo, Jason, any input?
--
MST