From: Frederic Konrad
Date: Wed, 13 May 2015 14:30:38 +0200
Subject: Re: [Qemu-devel] when does a target frontend need to use gen_io_start()/gen_io_end() ?
To: Paolo Bonzini, Peter Maydell
Cc: mttcg@greensocs.com, QEMU Developers, Pavel Dovgaluk, Richard Henderson
Message-ID: <555343EE.3080302@greensocs.com>
In-Reply-To: <5553215E.4070705@redhat.com>

Hi,

On 13/05/2015 12:03, Paolo Bonzini wrote:
>
> On 13/05/2015 11:41, Peter Maydell wrote:
>>> For -icount and SMP, yes. I even posted a patch to that end once.
>> I don't see why -icount and SMP need to be mutually exclusive.
>> If we're round-robining between the SMP CPUs then they should
>> all stay deterministic, I would have thought?
> No, because the round-robin switches happen non-deterministically when
> the I/O thread kicks the VCPU in qemu_mutex_lock_iothread.
>
> It gets worse with BQL-free TCG, which lets you remove the kicks
> altogether (and the round-robin disappears in favor of true
> multithreading). Even if you could keep the kicks, having both
> round-robin and multi-threading would be extra complication in the code
> and a cause of bitrot.
If you're talking about the kick_thread/kick_cpu mechanism, I think we
should keep that. We need to be able to synchronize all VCPUs somehow when
we want to do, for example, a tb/tlb flush or a tb_invalidate, so we are
sure no other VCPU is executing a TB at that point. But then yes, we will
probably have pain with icount and make it less deterministic than it is
today (it's broken in my series anyway, because cpu_exec_nocache does a
tb_invalidate).

Fred

>>> You can get -icount and multi-threaded TCG (which for UP is simply TCG
>>> with execution out of the BQL) together, I think. For example you could
>>> handle cpu->icount_decr.u16.low == 0 like cpu->halted, hanging the CPU
>>> thread until QEMU_CLOCK_VIRTUAL timers have been processed. The I/O
>>> thread would have to kick the CPU after processing QEMU_CLOCK_VIRTUAL
>>> timers---not hard to do.
>> Multithreaded TCG for a UP guest isn't very interesting though...
> BQL-free TCG is interesting though, for two reasons:
>
> 1) maintainability: get rid of all the aforementioned "kick VCPU" stuff
> in qemu_mutex_lock_iothread;
>
> 2) performance: allow handling of I/O events to run in parallel with the
> VCPU, rather than the lockstep technique we have now. This improves
> performance, so that for example you might get rid of the artificial
> rate limiting in ptimer.c.
>
> In case it wasn't clear, BQL-freedom is the main reason why I'm
> interested in multithreaded TCG! :)
>
> Paolo