From: Sergey Fedorov
Date: Fri, 8 Jul 2016 22:55:31 +0300
Subject: Re: [Qemu-devel] [PATCH 3/3] tcg: Avoid bouncing tb_lock between tb_gen_code() and tb_add_jump()
To: Paolo Bonzini
Cc: Sergey Fedorov, qemu-devel@nongnu.org, Alex Bennée, patches@linaro.org,
 mttcg@greensocs.com, fred konrad, a rigo, cota@braap.org, bobby prani,
 rth@twiddle.net, mark burton, jan kiszka, peter maydell, claudio fontana,
 Peter Crosthwaite
Message-ID: <57800533.2070303@gmail.com>
In-Reply-To: <5677f192-7367-bf1a-353f-1b0678b1c8fb@redhat.com>

On 08/07/16 17:07, Paolo Bonzini wrote:
>
> On 08/07/2016 14:32, Sergey Fedorov wrote:
>>>>>> I think we can do even better. One option is using a separate tiny lock
>>>>>> to protect direct jump set/reset instead of tb_lock.
>>>> If you have to use a separate tiny lock, you don't gain anything compared
>>>> to the two critical sections, do you?
>> If we have a separate lock for direct jump set/reset then we can do fast
>> TB lookup + direct jump patching without taking tb_lock at all. How much
>> this would reduce lock contention largely depends on the workload we use.
> Yeah, it probably would be easy enough that it's hard to object to it
> (unlike the other idea below, which I'm not very comfortable with, at
> least without seeing patches).
>
> The main advantage would be that this tiny lock could be a spinlock
> rather than a mutex.

Well, the problem is more subtle than we thought: tb_find_fast() can
race with tb_phys_invalidate(). The first tb_find_phys(), done outside
the lock, can return a TB which is being invalidated, and a direct jump
can then be set up to this TB. That can happen after the concurrent
tb_phys_invalidate() has already reset all the direct jumps to the TB,
so we end up with a direct jump to an invalidated TB. Even extending the
tb_lock critical section wouldn't help as long as at least one
tb_find_phys() is performed outside the lock.
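
Spelled out, the interleaving is roughly the following. This is only a
sketch with stand-in declarations (cpu_thread_sketch() is a made-up name
and the prototypes are simplified), not the actual cpu-exec.c /
translate-all.c code:

/* Stand-in declarations; the real signatures are different. */
typedef struct TranslationBlock TranslationBlock;

TranslationBlock *tb_find_phys(void);   /* hash lookup, no tb_lock held */
void tb_add_jump(TranslationBlock *from, int n, TranslationBlock *to);
void tb_lock(void);
void tb_unlock(void);

/* CPU thread: roughly the lookup-and-patch path of tb_find_fast(). */
static void cpu_thread_sketch(TranslationBlock *last_tb, int tb_exit)
{
    /* (1) Lookup outside tb_lock: may return a TB that is about to be
     *     invalidated. */
    TranslationBlock *tb = tb_find_phys();
    if (tb == NULL) {
        return;     /* miss: tb_gen_code() path, not relevant here */
    }

    /* (2) Window: another thread takes tb_lock and runs
     *     tb_phys_invalidate(tb), which removes tb from the hash table
     *     and resets every direct jump currently pointing at tb. */

    /* (3) We now patch a direct jump to the already invalidated tb, and
     *     nothing will ever reset it again. Taking tb_lock here, or only
     *     a "tiny" jump-patching lock, doesn't help: the stale tb pointer
     *     was obtained back at step (1). */
    tb_lock();
    tb_add_jump(last_tb, tb_exit, tb);
    tb_unlock();
}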

Kind regards,
Sergey

>
>>> The one below is even more complicated. I'm all for simple lock-free
>>> stacks (e.g. QSLIST_INSERT_HEAD_ATOMIC and QSLIST_MOVE_ATOMIC), but lock-free
>>> lists are too much, especially if with the complicated first/next mechanism
>>> of TCG's chained block lists.
>> Direct jump handling code is pretty isolated and self-contained. It
>> would require to back out of tb_remove_from_jmp_list() and sprinkle a
>> couple of atomic_rcu_read()/atomic_rcu_set() with some comments, I
>> think. Maybe it could be easier to justify looking at actual patches.
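
To make that a bit more concrete, such "sprinkling" could look roughly
like the following. This is a sketch only: TBJumpSketch,
tb_jmp_walk_sketch() and tb_jmp_insert_sketch() are made-up names, and
the real tagged-pointer jmp_list_first/jmp_list_next[2] encoding is
glossed over:

#include "qemu/atomic.h"        /* atomic_rcu_read(), atomic_rcu_set() */

struct TranslationBlock;

/* Simplified stand-in for the list of TBs that jump into a given TB. */
struct TBJumpSketch {
    struct TranslationBlock *tb;    /* TB containing the direct jump */
    struct TBJumpSketch *next;      /* next incoming jump, or NULL */
};

/* Reader: walk the incoming-jump list without holding tb_lock. */
static void tb_jmp_walk_sketch(struct TBJumpSketch **pfirst,
                               void (*fn)(struct TranslationBlock *tb))
{
    struct TBJumpSketch *e;

    for (e = atomic_rcu_read(pfirst); e; e = atomic_rcu_read(&e->next)) {
        fn(e->tb);
    }
}

/* Writer: publish a new entry at the head; updates themselves stay
 * serialized by a lock on the write side. */
static void tb_jmp_insert_sketch(struct TBJumpSketch **pfirst,
                                 struct TBJumpSketch *e)
{
    e->next = *pfirst;
    /* store-release: e->next is visible before readers can see e */
    atomic_rcu_set(pfirst, e);
}

Removal is where it stops being trivial: a dead entry's memory could only
be reused after a grace period, and backing that out of
tb_remove_from_jmp_list() is exactly the part that is easier to judge
from actual patches.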