From mboxrd@z Thu Jan 1 00:00:00 1970
From: Asias He
Subject: Re: [PATCH] kvm tools: Process virtio blk requests in separate thread
Date: Wed, 30 Nov 2011 15:21:58 +0800
Message-ID: <4ED5D996.1070408@gmail.com>
References: <1322576888-7451-1-git-send-email-asias.hejun@gmail.com>
 <1322577409.7003.7.camel@lappy>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Cc: Pekka Enberg, Cyrill Gorcunov, Ingo Molnar, kvm@vger.kernel.org
To: Sasha Levin
Return-path:
Received: from mail-yx0-f174.google.com ([209.85.213.174]:40458 "EHLO
 mail-yx0-f174.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
 with ESMTP id S1753865Ab1K3HXN (ORCPT ); Wed, 30 Nov 2011 02:23:13 -0500
Received: by yenl6 with SMTP id l6so199106yen.19 for ;
 Tue, 29 Nov 2011 23:23:13 -0800 (PST)
In-Reply-To: <1322577409.7003.7.camel@lappy>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 11/29/2011 10:36 PM, Sasha Levin wrote:
> On Tue, 2011-11-29 at 22:28 +0800, Asias He wrote:
>> Currently, all blk requests are processed in notify_vq(), which runs in
>> the context of the ioeventfd thread: ioeventfd__thread(). The processing
>> in notify_vq() may take a long time to complete.
>>
>> We should make notify_vq() return as soon as possible, since all devices
>> share the single ioeventfd thread. Otherwise, it will block other
>> devices' notify_vq() from being called and starve those devices.
>>
>> In virtio net's notify_vq(), we simply signal the tx/rx handler threads
>> and return.
>
> Why not use the threadpool?

No.

1) In the thread pool model, each job handling operation,
thread_pool__do_job(), takes about 6 or 7 mutex_{lock,unlock} ops. Most
of these are on a global mutex (job_mutex), which is contended by the
threads in the pool. That is fine for the non-performance-critical
virtio devices, such as console, rng, etc., but it is not optimal for
the net and blk devices.

2) Using dedicated threads to handle blk requests opens the door for
the user to set a different IO priority for the blk threads (see the
sketch at the end of this mail).

3) It also reduces the contention between the net and blk devices if
they do not share the thread pool.

--->> the thread pool lock flow <<---

thread_pool__do_job()
{
        mutex_lock(&jobinfo->mutex);
        if (jobinfo->signalcount++ == 0)
                thread_pool__job_push_locked(job)
                {
                        mutex_lock(&job_mutex);
                        thread_pool__job_push(job);
                        mutex_unlock(&job_mutex);
                }
        mutex_unlock(&jobinfo->mutex);

        mutex_lock(&job_mutex);
        pthread_cond_signal(&job_cond);
        mutex_unlock(&job_mutex);
}

thread_pool__threadfunc()
{
        for (;;) {
                mutex_lock(&job_mutex);
                pthread_cond_wait(&job_cond, &job_mutex);
                mutex_unlock(&job_mutex);

                if (curjob)
                        thread_pool__handle_job(curjob);
        }
}

thread_pool__handle_job()
{
        while (job) {
                job->callback(job->kvm, job->data);

                mutex_lock(&job->mutex);
                thread_pool__job_push_locked(job);
                mutex_unlock(&job->mutex);

                job = thread_pool__job_pop_locked();
        }
}

--
Asias He
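
For illustration, a minimal sketch of the dedicated-thread pattern argued
for above. This is not code from the kvm tools tree; the names
(blk_thread, process_blk_requests, blk_kicked) are invented for the
example. notify_vq() only wakes the per-device thread and returns, and
the blk thread can choose its own IO priority via the ioprio_set syscall
(no glibc wrapper, so it goes through syscall()):

/*
 * Sketch of a dedicated virtio-blk I/O thread (assumed names, not the
 * kvm tools implementation).  notify_vq() only wakes the thread; all
 * request processing happens in blk_thread().
 */
#include <pthread.h>
#include <stdbool.h>
#include <sys/syscall.h>
#include <unistd.h>

/* ioprio_set() has no glibc wrapper; constants as in linux/ioprio.h */
#define IOPRIO_WHO_PROCESS      1
#define IOPRIO_CLASS_IDLE       3
#define IOPRIO_CLASS_SHIFT      13
#define IOPRIO_PRIO_VALUE(class, data) \
        (((class) << IOPRIO_CLASS_SHIFT) | (data))

static pthread_mutex_t blk_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t blk_cond = PTHREAD_COND_INITIALIZER;
static bool blk_kicked;

static void process_blk_requests(void)
{
        /* Drain the vring and submit I/O; may block for a long time. */
}

static void *blk_thread(void *arg)
{
        (void)arg;

        /*
         * A dedicated thread can pick its own IO priority (here: idle
         * class) without affecting the shared ioeventfd thread or the
         * other devices' threads.
         */
        syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0));

        for (;;) {
                pthread_mutex_lock(&blk_mutex);
                while (!blk_kicked)
                        pthread_cond_wait(&blk_cond, &blk_mutex);
                blk_kicked = false;
                pthread_mutex_unlock(&blk_mutex);

                process_blk_requests();
        }
        return NULL;
}

/* Called from ioeventfd__thread(); must return quickly. */
static void notify_vq(void)
{
        pthread_mutex_lock(&blk_mutex);
        blk_kicked = true;
        pthread_cond_signal(&blk_cond);
        pthread_mutex_unlock(&blk_mutex);
}

Compared with the thread pool path shown earlier, a virtqueue kick then
costs one lock/unlock pair on a per-device mutex instead of several
operations on the shared job_mutex, and the blk thread's IO priority can
be tuned independently of the console/rng/net threads.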