From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bandan Das
Subject: Re: [RFC PATCH 0/4] cgroup aware workqueues
Date: Thu, 31 Mar 2016 14:45:43 -0400
Message-ID:
References: <1458339291-4093-1-git-send-email-bsd@redhat.com>
	<201603210758.u2L7wiY9003907@d06av07.portsmouth.uk.ibm.com>
	<20160330170419.GG7822@mtj.duckdns.org>
	<201603310617.u2V6HIkt008006@d06av12.portsmouth.uk.ibm.com>
	<20160331171435.GD24661@htj.duckdns.org>
Mime-Version: 1.0
Content-Type: text/plain
Cc: Michael Rapoport, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	mst@redhat.com, jiangshanlai@gmail.com
To: Tejun Heo
Return-path:
In-Reply-To: <20160331171435.GD24661@htj.duckdns.org> (Tejun Heo's message of
	"Thu, 31 Mar 2016 13:14:35 -0400")
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

Tejun Heo writes:

> Hello, Michael.
>
> On Thu, Mar 31, 2016 at 08:17:13AM +0200, Michael Rapoport wrote:
>> > There really shouldn't be any difference when using unbound
>> > workqueues. workqueue becomes a convenience thing which manages
>> > worker pools and there shouldn't be any difference between workqueue
>> > workers and kthreads in terms of behavior.
>>
>> I agree that there really shouldn't be any performance difference, but the
>> tests I've run show otherwise. I have no idea why and I hadn't time yet to
>> investigate it.
>
> I'd be happy to help digging into what's going on. If kvm wants full
> control over the worker thread, kvm can use workqueue as a pure
> threadpool. Schedule a work item to grab a worker thread with the
> matching attributes and keep using it as it'd a kthread. While that
> wouldn't be able to take advantage of work item flushing and so on,
> it'd still be a simpler way to manage worker threads and the extra
> stuff like cgroup membership handling doesn't have to be duplicated.
>
>> > > opportunity for optimization, at least for some workloads...
>> >
>> > What sort of optimizations are we talking about?
>>
>> Well, if we take Evlis (1) as for the theoretical base, there could be
>> benefit of doing I/O scheduling inside the vhost.
>
> Yeah, if that actually is beneficial, take full control of the
> kworker thread.

Well, even if it actually is beneficial (and I am sure it is), it seems
a little impractical to block current improvements on a future prospect
that, as far as I know, no one is working on. There have been
discussions about this in the past and, iirc, most people agree about
not going the byos* route. But I am still all for such a proposal, and
if it's good/clean enough, I think we can definitely tear down what we
have and throw it away! The I/O scheduling part is intrusive enough that
even the current code base would have to change quite a bit.

*byos = bring your own scheduling ;)

> Thanks.