Date: Thu, 24 Sep 2020 14:58:10 -0400
From: Steven Rostedt
To: Thomas Gleixner
Subject: Re: [Intel-gfx] [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends
Message-ID: <20200924145810.2f0b806f@rorschach.local.home>
In-Reply-To: <875z8383gh.fsf@nanos.tec.linutronix.de>

On Thu, 24 Sep 2020 19:55:10 +0200
Thomas Gleixner wrote:

> On Thu, Sep 24 2020 at 08:32, Steven Rostedt wrote:
> > On Thu, 24 Sep 2020 08:57:52 +0200
> > Thomas Gleixner wrote:
> >
> >> > Now as for migration disabled nesting, at least now we would have
> >> > groupings of this, and perhaps the theorists can handle that. I mean,
> >> > how is this much different than having a bunch of tasks blocked on a
> >> > mutex where the owner is pinned on a CPU?
> >> >
> >> > migrate_disable() is a BKL of pinning affinity.
> >>
> >> No. That's just wrong. preempt disable is a concurrency control,
> >
> > I think you totally misunderstood what I was saying. The above wasn't about
> > comparing preempt_disable to migrate_disable. It was comparing
> > migrate_disable to a chain of tasks blocked on mutexes where the top owner
> > has preempt_disable set. You still have a bunch of tasks that can't move to
> > other CPUs.
>
> What? The top owner does not prevent any task from moving. The tasks
> cannot move because they are blocked on the mutex, which means they are
> not runnable and non runnable tasks are not migrated at all.

And neither are migrate-disabled tasks that have been preempted by a
higher priority task.

> I really don't understand what you are trying to say.

Don't worry about it. I was just making a high level comparison of how
migrate-disabled tasks blocked on a higher priority task are similar to
tasks blocked on a mutex held by a pinned task that is preempted by a
high priority task. But we can forget this analogy, as it's not
appropriate for the current conversation.

> >> > If we only have local_lock() available (even on !RT), then it makes
> >> > the blocking in groups.
> >> > At least this way you could grep for all the
> >> > different local_locks in the system and plug that into the algorithm
> >> > for WCS, just like one would with a bunch of mutexes.
> >>
> >> You cannot do that on RT at all where migrate disable is substituting
> >> preempt disable in spin and rw locks. The result would be the same as
> >> with a !RT kernel just with horribly bad performance.
> >
> > Note, the spin and rwlocks already have a lock associated with them. Why
> > would it be any different on RT? I wasn't suggesting adding another lock
> > inside a spinlock. Why would I recommend THAT? I wasn't recommending
> > blindly replacing migrate_disable() with local_lock(). I just meant expose
> > local_lock() but not migrate_disable().
>
> We already exposed local_lock() to non RT and it's for places which do
> preempt_disable() or local_irq_disable() without having a lock
> associated. But both primitives are scope less and therefore behave like
> CPU local BKLs. What local_lock() provides in these cases is:
>
>   - Making the protection scope clear by associating a named local
>     lock which is covered by lockdep.
>
>   - It still maps to preempt_disable() or local_irq_disable() in !RT
>     kernels
>
>   - The scope and the named lock allows RT kernels to substitute with
>     real (recursion aware) locking primitives which keep preemption and
>     interrupts enabled, but provide the fine grained protection for the
>     scoped critical section.

I'm very much aware of the above.

> So how would you substitute migrate_disable() with a local_lock()? You
> can't. Again migrate_disable() is NOT a concurrency control and
> therefore it cannot be substituted by any concurrency control primitive.

When I was first writing my email, I was writing about a way to replace
migrate_disable() with a construct similar to local locks without
actually mentioning local locks, but then rewrote it to state local
locks, trying to simplify what I was writing.
I shouldn't have done that, because it made it look as if I wanted to
use local_lock() unmodified. I was actually thinking of a new construct
that is similar to, but not exactly the same as, a local lock. But that
would just make things more complex, so we can forget about it.

I'll wait to see what Peter produces.

-- Steve
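As a footnote for readers following along: the !RT mapping Thomas describes above (local_lock() costing no more than preempt_disable(), while still carrying a named, lockdep-visible scope that an RT kernel can substitute with a real lock) can be sketched as a toy userspace model. This is only an illustration of the idea, not the kernel's actual implementation; the counter, struct layout, and function names here are made up for the sketch.

```c
/* Toy userspace model of the local_lock() idea on a !RT kernel.
 * The "lock" is nothing but a preempt_disable()/preempt_enable() pair,
 * yet it carries a named object that lockdep could associate with the
 * scoped critical section.  On RT the same named scope would instead be
 * backed by a real (sleeping, recursion-aware) lock, keeping preemption
 * and interrupts enabled.  Purely illustrative, not kernel code. */

/* stand-in for the per-CPU preemption counter */
static int preempt_count;

static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }

struct local_lock {
    const char *name;   /* the named scope lockdep would track */
    int held;
};

static void local_lock_acquire(struct local_lock *l)
{
    preempt_disable();  /* !RT mapping: just disable preemption */
    l->held = 1;        /* RT would instead take a real sleeping lock */
}

static void local_lock_release(struct local_lock *l)
{
    l->held = 0;
    preempt_enable();
}
```

The point of the named object, as opposed to a bare preempt_disable(), is that the protected scope is explicit rather than a CPU-local BKL: tooling can see which critical section is which, and an RT build can swap the body of acquire/release without touching the call sites.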