Date: Mon, 28 Aug 2017 03:05:46 +0000 (UTC)
From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: Andy Lutomirski <luto@amacapital.net>
Cc: "Paul E. McKenney", Peter Zijlstra, linux-kernel, Boqun Feng, Andrew Hunter, maged michael, gromer, Avi Kivity, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, Dave Watson, Andy Lutomirski, Will Deacon, Hans Boehm
Message-ID: <1463521395.16945.1503889546934.JavaMail.zimbra@efficios.com>
In-Reply-To: <references thread>
References: <20170827205035.25620-1-mathieu.desnoyers@efficios.com>
Subject: Re: [PATCH v2] membarrier: provide register sync core cmd

----- On Aug 27, 2017, at 3:53 PM, Andy Lutomirski luto@amacapital.net wrote:

>> On Aug 27, 2017, at 1:50 PM, Mathieu Desnoyers
>> <mathieu.desnoyers@efficios.com> wrote:
>>
>> Add a new MEMBARRIER_CMD_REGISTER_SYNC_CORE command to the membarrier
>> system call. It allows processes to register their intent to have their
>> threads issue core serializing barriers in addition to memory barriers
>> whenever a membarrier command is performed.
>>
>
> Why is this stateful? That is, why not just have a new membarrier command to
> sync every thread's icache?

If we'd do it on every CPU's icache, it would be as trivial as you say.
The concern here is sending IPIs only to the CPUs running threads that belong to the same process, so we don't disturb unrelated processes. If we could just grab each CPU's runqueue lock, it would be fairly simple to do, but we want to avoid hitting each runqueue with the exclusive atomic access associated with grabbing the lock (cache-line bouncing). So the "private" membarrier command ends up reading the rq->curr->mm pointer value for each runqueue and comparing it to its own current->mm value.

However, this means that whenever we skip a CPU, we're not sending an IPI to that CPU. So we rely on the scheduler to provide the required full barrier both before storing to rq->curr (after the user-space memory accesses performed by "prev") and after storing to rq->curr (before the user-space memory accesses performed by "next").

The IPI of the private membarrier can issue both smp_mb() and sync_core() (that's what my implementation does). However, having sys_membarrier issue core serializing barriers adds extra constraints on entry into the scheduler and on resuming to user-space. It's not sufficient to order user-space memory accesses wrt storing to rq->curr; we also want to serialize the core execution. This is why I'm adding sync_core before the full barrier on entry into the scheduler, and sync_core after the full barrier on exit.

Arguably, some architectures may not need the extra sync_core on exit (e.g. x86 has iret, which implies core serialization), but there are cases where it's not guaranteed (AFAIK sysexit), and it's rarely guaranteed on entry.

So we end up with the possibility of adding the core serialization unconditionally on entry to and exit from the scheduler. However, as my numbers below show, performance is measurably impacted in scheduler-heavy benchmarks. Therefore, I propose to make processes register their intent to have the scheduler issue core serializing barriers on their behalf when it schedules them out/in.
>> * Scheduler Overhead Benchmarks
>>
>> Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
>> taskset 01 ./perf bench sched pipe -T
>> Linux v4.13-rc6
>>
>>                            Avg. usecs/op  Std.Dev. usecs/op
>> Before this change:             2.75           0.12
>> Non-registered processes:       2.73           0.08
>> Registered processes:           3.07           0.02

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com