From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752696AbbIPMZq (ORCPT );
	Wed, 16 Sep 2015 08:25:46 -0400
Received: from mx1.redhat.com ([209.132.183.28]:36270 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752311AbbIPMZn (ORCPT );
	Wed, 16 Sep 2015 08:25:43 -0400
Date: Wed, 16 Sep 2015 14:22:49 +0200
From: Oleg Nesterov 
To: Paolo Bonzini 
Cc: Christian Borntraeger , paulmck@linux.vnet.ibm.com,
	Peter Zijlstra , Tejun Heo , Ingo Molnar ,
	"linux-kernel@vger.kernel.org >> Linux Kernel Mailing List" ,
	KVM list 
Subject: Re: [4.2] commit d59cfc09c32 (sched, cgroup: replace
	signal_struct->group_rwsem with a global percpu_rwsem) causes
	regression for libvirt/kvm
Message-ID: <20150916122249.GA28821@redhat.com>
References: <55F8097A.7000206@de.ibm.com>
	<20150915130550.GC16853@twins.programming.kicks-ass.net>
	<55F81EE2.4090708@de.ibm.com>
	<55F84A6B.1010207@redhat.com>
	<20150915173836.GO4029@linux.vnet.ibm.com>
	<55F92904.4090206@redhat.com>
	<55F92F04.1040706@de.ibm.com>
	<55F9326A.9070509@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <55F9326A.9070509@redhat.com>
User-Agent: Mutt/1.5.18 (2008-05-17)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On 09/16, Paolo Bonzini wrote:
>
>
> On 16/09/2015 10:57, Christian Borntraeger wrote:
> > On 16.09.2015 at 10:32, Paolo Bonzini wrote:
> >>
> >>
> >> On 15/09/2015 19:38, Paul E. McKenney wrote:
> >>> Excellent points!
> >>>
> >>> Other options in such situations include the following:
> >>>
> >>> o	Rework so that the code uses call_rcu*() instead of
> >>>	*_expedited().
> >>>
> >>> o	Maintain a per-task or per-CPU counter so that every so many
> >>>	*_expedited() invocations instead uses the non-expedited
> >>>	counterpart.  (For example, synchronize_rcu instead of
> >>>	synchronize_rcu_expedited().)
> >>
> >> Or just use ratelimit (untested):
> >
> > One of my tests was to always replace synchronize_sched_expedited
> > with synchronize_sched and things turned out to be even worse. Not
> > sure if it makes sense to test your in-the-middle approach?
>
> I don't think it applies here, since down_write/up_write is a
> synchronous API.
>
> If the revert isn't easy, I think backporting rcu_sync is the best bet.

I leave this to Paul and Tejun... at least I think this is not v4.2
material.

> The issue is that rcu_sync doesn't eliminate synchronize_sched,

Yes, but it eliminates _expedited(). This is good, but otoh this means
that (say) an individual __cgroup_procs_write() can take much more time.
However, it won't block the readers and/or disturb the whole system.
And percpu_up_write() doesn't do synchronize_sched() at all.

> it only
> makes it more rare.

Yes, so we can hope that multiple __cgroup_procs_write()'s can "share"
a single synchronize_sched().

> So it's possible that it isn't eliminating the root
> cause of the problem.

We will see... Just in case, the current usage of percpu_down_write()
is suboptimal: we do not need to do ->sync() under cgroup_mutex. But
this needs some WIP changes in rcu_sync.

Plus we can do more improvements, but this is off-topic right now.

Oleg.
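
[Editorial note on the thread above: Paolo's "ratelimit (untested)"
patch was trimmed from the quoted text, so what follows is only a
minimal, hypothetical sketch of the idea being discussed, not his
actual patch. It allows a burst of expedited grace periods and then
falls back to the plain, unintrusive synchronize_sched(). Only
DEFINE_RATELIMIT_STATE() and __ratelimit() are the kernel's real
ratelimit API; the helper name and the one-second/burst-of-10
threshold are invented for illustration.]

/*
 * Hypothetical sketch of the ratelimit idea from the thread (NOT
 * Paolo's actual, trimmed patch): use the expedited grace period
 * while under the rate limit, otherwise fall back to the normal one.
 */
#include <linux/ratelimit.h>
#include <linux/rcupdate.h>

/* Allow at most 10 expedited grace periods per second (made-up numbers). */
static DEFINE_RATELIMIT_STATE(expedited_rs, HZ, 10);

static void synchronize_sched_maybe_expedited(void)
{
	if (__ratelimit(&expedited_rs))
		synchronize_sched_expedited();	/* fast, but IPIs every CPU */
	else
		synchronize_sched();		/* slow, but system-friendly */
}

This matches Paul's "every so many *_expedited() invocations" option
in spirit, except that the fallback is driven by call frequency rather
than by a per-task or per-CPU counter.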
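
[A second editorial sketch, for Oleg's point that rcu_sync lets
multiple writers "share" a single synchronize_sched(). This is a toy
model, NOT the kernel's rcu_sync: the names rcu_sync_toy_* are made
up, and the real code tracks enter/exit state and flips the readers
back to the fast path lazily from a call_rcu() callback, which is
precisely what lets back-to-back __cgroup_procs_write() calls reuse
one grace period.]

#include <linux/mutex.h>
#include <linux/rcupdate.h>

/*
 * Toy model of the rcu_sync idea.  Readers take a lockless fast path
 * unless gp_state says a writer is (or was recently) around.  Only
 * the writer that switches the state pays for a grace period; while
 * gp_state stays set, later writers skip synchronize_sched() entirely.
 */
struct rcu_sync_toy {
	int		gp_state;	/* nonzero: readers use slow path */
	struct mutex	lock;
};

static void rcu_sync_toy_enter(struct rcu_sync_toy *rs)
{
	mutex_lock(&rs->lock);
	if (!rs->gp_state) {
		rs->gp_state = 1;
		synchronize_sched();	/* only the first writer waits */
	}
	mutex_unlock(&rs->lock);
}

static void rcu_sync_toy_exit(struct rcu_sync_toy *rs)
{
	/*
	 * Deliberately leave gp_state set here.  The real rcu_sync
	 * clears it from a deferred call_rcu() callback, so a writer
	 * arriving before that callback runs finds gp_state still set
	 * and skips the grace period -- the "sharing" Oleg describes.
	 */
	mutex_lock(&rs->lock);
	mutex_unlock(&rs->lock);
}

This also illustrates why percpu_up_write() needs no synchronize_sched()
of its own: the switch back to the reader fast path can happen lazily,
off the writer's critical path.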