From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Paolo Bonzini
Cc: Christian Borntraeger, Peter Zijlstra, Tejun Heo, Ingo Molnar,
    Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
    KVM list, Oleg Nesterov
Subject: Re: [4.2] commit d59cfc09c32 (sched, cgroup: replace signal_struct->group_rwsem with a global percpu_rwsem) causes regression for libvirt/kvm
Date: Tue, 15 Sep 2015 10:38:36 -0700
Message-ID: <20150915173836.GO4029@linux.vnet.ibm.com>
In-Reply-To: <55F84A6B.1010207@redhat.com>
References: <55F8097A.7000206@de.ibm.com>
    <20150915130550.GC16853@twins.programming.kicks-ass.net>
    <55F81EE2.4090708@de.ibm.com>
    <55F84A6B.1010207@redhat.com>

On Tue, Sep 15, 2015 at 06:42:19PM +0200, Paolo Bonzini wrote:
> On 15/09/2015 15:36, Christian Borntraeger wrote:
> > I am wondering why the old code behaved in such fatal ways.  Is there
> > some interaction between waiting for a reschedule in the
> > synchronize_sched writer and some fork code actually waiting for the
> > read side to get the lock, together with some rescheduling going on
> > waiting for a lock that fork holds?  lockdep does not give me any
> > hints, so I have no clue :-(
> 
> It may just be consuming too much CPU time.  kernel/rcu/tree.c warns
> about it:
> 
>  * if you are using synchronize_sched_expedited() in a loop, please
>  * restructure your code to batch your updates, and then use a single
>  * synchronize_sched() instead.
> 
> and you may remember that in KVM we switched from RCU to SRCU exactly
> to avoid userspace-controlled synchronize_rcu_expedited().
> 
> In fact, I would say that any userspace-controlled call to *_expedited()
> is a bug waiting to happen and a bad idea---because userspace can, with
> little effort, end up calling it in a loop.

Excellent points!  Other options in such situations include the following:

o	Rework the code to use call_rcu*() instead of *_expedited().

o	Maintain a per-task or per-CPU counter so that one out of every
	so many *_expedited() invocations instead uses the non-expedited
	counterpart.  (For example, synchronize_rcu() instead of
	synchronize_rcu_expedited().)  A sketch of this approach appears
	below.

Note that synchronize_srcu_expedited() is less troublesome than the
other *_expedited() functions, because synchronize_srcu_expedited()
does not inflict OS jitter on other CPUs.  This situation is being
improved, so that the other *_expedited() functions will inflict less
OS jitter and (mostly) avoid inflicting OS jitter on nohz_full CPUs
and on idle CPUs (the latter being important for battery-powered
systems).
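To make the second option concrete, here is a minimal sketch of such a
throttling wrapper.  The helper name, the per-CPU counter, and the limit
are made up for illustration; DEFINE_PER_CPU(), this_cpu_inc_return(),
and the synchronize_rcu*() primitives are the kernel APIs being assumed:

	#include <linux/percpu.h>
	#include <linux/rcupdate.h>

	/* Illustrative only: the name and value are arbitrary. */
	#define MY_EXPEDITED_LIMIT 16

	static DEFINE_PER_CPU(unsigned long, my_expedited_count);

	/*
	 * Use the expedited primitive most of the time, but fall back to
	 * a normal grace period once every MY_EXPEDITED_LIMIT calls on
	 * this CPU, so that a userspace-driven loop cannot generate an
	 * unbounded stream of expedited work.  The count is only
	 * approximate if the caller migrates between CPUs, which is fine
	 * for throttling purposes.  Must be called from process context,
	 * as both primitives can sleep.
	 */
	static void my_synchronize_rcu_throttled(void)
	{
		if (this_cpu_inc_return(my_expedited_count) % MY_EXPEDITED_LIMIT)
			synchronize_rcu_expedited();
		else
			synchronize_rcu();
	}

The exact limit matters much less than the guarantee that a
userspace-triggerable path can no longer issue expedited grace periods
without bound.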
In addition, the *_expedited() functions avoid hammering the CPUs with
N-squared OS jitter in response to concurrent invocation from all CPUs,
because multiple concurrent *_expedited() calls will be satisfied by a
single expedited grace-period operation.  Nevertheless, as Paolo points
out, it is still necessary to exercise caution when exposing synchronous
grace periods to userspace control.

							Thanx, Paul