Date: Wed, 16 Sep 2015 14:43:48 +0200
From: Oleg Nesterov
To: Paolo Bonzini
Cc: Christian Borntraeger, paulmck@linux.vnet.ibm.com, Peter Zijlstra,
	Tejun Heo, Ingo Molnar, Linux Kernel Mailing List, KVM list
Subject: Re: [4.2] commit d59cfc09c32 (sched, cgroup: replace
	signal_struct->group_rwsem with a global percpu_rwsem) causes
	regression for libvirt/kvm
Message-ID: <20150916124348.GA30646@redhat.com>
References: <55F8097A.7000206@de.ibm.com>
	<20150915130550.GC16853@twins.programming.kicks-ass.net>
	<55F81EE2.4090708@de.ibm.com>
	<55F84A6B.1010207@redhat.com>
	<20150915173836.GO4029@linux.vnet.ibm.com>
	<55F92904.4090206@redhat.com>
	<55F92F04.1040706@de.ibm.com>
	<55F9326A.9070509@redhat.com>
	<20150916122249.GA28821@redhat.com>
	<55F9621A.7070803@redhat.com>
In-Reply-To: <55F9621A.7070803@redhat.com>

On 09/16, Paolo Bonzini wrote:
>
>
> On 16/09/2015 14:22, Oleg Nesterov wrote:
> > > The issue is that rcu_sync doesn't eliminate synchronize_sched,
> >
> > Yes, but it eliminates _expedited(). This is good, but otoh this means
> > that (say) individual __cgroup_procs_write() can take much more time.
> > However, it won't block the readers and/or disturb the whole system.
>
> According to Christian, removing the _expedited() "makes things worse"

Yes sure, we cannot just remove _expedited() from down/up_read().

> in that the system takes ages to boot up and systemd timeouts.

Yes, this is clear.

> So I'm
> still a bit wary about anything that uses RCU for the cgroups write side.
>
> However, rcu_sync is okay with him, so perhaps it is really really
> effective. Christian, can you instrument how many synchronize_sched
> (out of the 6479 cgroup_procs_write calls) are actually executed at boot
> with the rcu rework?

Heh, another change I have in mind. It would be nice to add some
tracepoints. But first we should merge the current code.

Oleg.
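To make the trade-off above concrete, here is a very rough sketch of the
rcu_sync idea being discussed. This is illustrative code modelled on the
concept, not the actual kernel/rcu/sync.c or percpu-rwsem implementation;
locking and the call_rcu()-based deferred exit are omitted. The point is
that a writer pays for a single non-expedited grace period only when the
primitive transitions out of the reader-fast-path state, so a burst of
__cgroup_procs_write() calls does not trigger one synchronize_sched()
each, and readers are not disturbed the way system-wide _expedited()
grace periods disturb them.

#include <linux/rcupdate.h>

/*
 * Illustrative sketch only -- NOT the real kernel/rcu/sync.c code.
 * Locking and the call_rcu()-based deferred exit are left out; the
 * point is that synchronize_sched() runs once per idle->busy
 * transition of the primitive, not once per writer.
 */
struct rcu_sync_sketch {
	int gp_state;	/* 0: readers may use the fast path, 1: slow path */
	int gp_count;	/* writers currently between enter and exit */
};

static void rcu_sync_enter_sketch(struct rcu_sync_sketch *rss)
{
	if (rss->gp_count++ == 0 && rss->gp_state == 0) {
		/* force new readers onto the slow path ... */
		rss->gp_state = 1;
		/* ... and wait for pre-existing fast-path readers */
		synchronize_sched();
	}
}

static void rcu_sync_exit_sketch(struct rcu_sync_sketch *rss)
{
	/*
	 * The real code flips gp_state back to 0 only after another
	 * grace period, via call_rcu(), so back-to-back writers find
	 * the state already non-idle and skip synchronize_sched()
	 * entirely.
	 */
	--rss->gp_count;
}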
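As for the instrumentation Paolo asks for, the simplest throw-away
version (short of the tracepoints mentioned above) could be a pair of
counters. The helper names below (count_procs_write(),
count_sync_sched(), dump_cgroup_sync_stats()) are made up for
illustration and assume hooks in __cgroup_procs_write() and right before
the synchronize_sched() call in the rcu_sync enter path:

#include <linux/atomic.h>
#include <linux/printk.h>

/* Hypothetical debug counters, not part of any posted patch. */
static atomic_long_t procs_write_calls;
static atomic_long_t sync_sched_calls;

/* bump from __cgroup_procs_write() */
static inline void count_procs_write(void)
{
	atomic_long_inc(&procs_write_calls);
}

/* bump right before synchronize_sched() on the write side */
static inline void count_sync_sched(void)
{
	atomic_long_inc(&sync_sched_calls);
}

/* dump the ratio, e.g. from a late_initcall() once boot is done */
static void dump_cgroup_sync_stats(void)
{
	pr_info("cgroup_procs_write: %ld calls, %ld synchronize_sched\n",
		atomic_long_read(&procs_write_calls),
		atomic_long_read(&sync_sched_calls));
}

If the rcu_sync rework behaves as hoped, the second counter should stay
far below the first (and below the 6479 calls mentioned above), since
most writes would find the semaphore already in the non-idle state.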