Date: Thu, 16 Jul 2020 09:36:47 -0400
From: Joel Fernandes
To: Uladzislau Rezki
Cc: Sebastian Andrzej Siewior, LKML, RCU, linux-mm, "Paul E. McKenney", Andrew Morton, "Theodore Y. Ts'o", Matthew Wilcox, Oleksiy Avramchenko
Subject: Re: [PATCH 1/1] rcu/tree: Drop the lock before entering to page allocator
Message-ID: <20200716133647.GA242690@google.com>
References: <20200715183537.4010-1-urezki@gmail.com> <20200715185628.7b4k3o5efp4gnbla@linutronix.de> <20200716091913.GA28595@pc636>
In-Reply-To: <20200716091913.GA28595@pc636>

On Thu, Jul 16, 2020 at 11:19:13AM +0200, Uladzislau Rezki wrote:
> On Wed, Jul 15, 2020 at 07:13:33PM -0400, Joel Fernandes wrote:
> > On Wed, Jul 15, 2020 at 2:56 PM Sebastian Andrzej Siewior wrote:
> > >
> > > On 2020-07-15 20:35:37 [+0200], Uladzislau Rezki (Sony) wrote:
> > > > @@ -3306,6 +3307,9 @@ kvfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
> > > >  	if (IS_ENABLED(CONFIG_PREEMPT_RT))
> > > >  		return false;
> > > >
> > > > +	preempt_disable();
> > > > +	krc_this_cpu_unlock(*krcp, *flags);
> > >
> > > Now you enter memory allocator with disabled preemption. This isn't any
> > > better but we don't have a warning for this yet.
> > > What happened to the part where I asked for a spinlock_t?
> >
> > Ulad,
> > Wouldn't the replacing of preempt_disable() with migrate_disable()
> > above resolve Sebastian's issue?
> >
> This for regular kernel only. That means that migrate_disable() is
> equal to preempt_disable(). So, no difference.
But this will force a preempt_disable() context into the low-level page allocator on -RT kernels, which I believe is not what Sebastian wants.

The whole reason the spinlock vs. raw-spinlock ordering matters is that on RT the spinlock is sleeping. So if you have:

	raw_spin_lock(..);
	spin_lock(..);   <-- can sleep on RT, so sleep-while-atomic (SWA) violation

That's the main reason you are dropping the lock before calling the allocator. But if you call preempt_disable() and then drop the lock, that may fix the lock-ordering violation only to reintroduce the SWA issue, which is why you wanted to drop the lock in the first place.

migrate_disable() on -RT kernels will not disable preemption, which I believe is where Sebastian is coming from:
https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git/tree/kernel/sched/core.c?h=v5.4-rt#n8178

In your defense, the "don't disable preemption on migrate_disable()" part is not upstream yet. So maybe this will not work on upstream PREEMPT_RT, but I'm not sure whether folks are really running upstream PREEMPT_RT without going through the RT tree.

I do not see a drawback to just keeping it as migrate_disable(), which should make everyone happy. But Sebastian may have other opinions, such as that converting the lock to a normal spinlock_t makes the code work on upstream's version of PREEMPT_RT without requiring the migrate_disable() change. He can elaborate more.

Paul, how does all this sound to you?

thanks,

- Joel