Date: Tue, 25 May 2021 13:35:36 +0100
From: Mel Gorman
To: Vlastimil Babka
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Christoph Lameter,
	David Rientjes, Pekka Enberg, Joonsoo Kim, Sebastian Andrzej Siewior,
	Thomas Gleixner, Jesper Dangaard Brouer, Peter Zijlstra, Jann Horn
Subject: Re: [RFC 09/26] mm, slub: move disabling/enabling irqs to ___slab_alloc()
Message-ID: <20210525123536.GR30378@techsingularity.net>
References: <20210524233946.20352-1-vbabka@suse.cz> <20210524233946.20352-10-vbabka@suse.cz>
In-Reply-To: <20210524233946.20352-10-vbabka@suse.cz>

On Tue, May 25, 2021 at 01:39:29AM +0200, Vlastimil Babka wrote:
> Currently __slab_alloc() disables irqs around the whole ___slab_alloc(). This
> includes cases where this is not needed, such as when the allocation ends up in
> the page allocator and has to awkwardly enable irqs back based on gfp flags.
> Also the whole kmem_cache_alloc_bulk() is executed with irqs disabled even when
> it hits the __slab_alloc() slow path, and long periods with disabled interrupts
> are undesirable.
>
> As a first step towards reducing irq disabled periods, move irq handling into
> ___slab_alloc(). Callers will instead prevent the s->cpu_slab percpu pointer
> from becoming invalid via migrate_disable(). This does not protect against
> access preemption, which is still done by disabled irq for most of
> ___slab_alloc(). As a small immediate benefit, the slab_out_of_memory() call
> from ___slab_alloc() is now done with irqs enabled.
>
> kmem_cache_alloc_bulk() disables irqs for its fastpath and then re-enables them
> before calling ___slab_alloc(), which then disables them at its discretion. The
> whole kmem_cache_alloc_bulk() operation also disables cpu migration.
>
> When ___slab_alloc() calls new_slab() to allocate a new page, re-enable
> preemption, because new_slab() will re-enable interrupts in contexts that allow
> blocking.
>
> The patch itself will thus increase overhead a bit due to disabled migration
> and increased disabling/enabling of irqs in kmem_cache_alloc_bulk(), but that
> will be gradually improved in the following patches.
>
> Signed-off-by: Vlastimil Babka

Why did you use migrate_disable instead of preempt_disable? There is a
fairly large comment in include/linux/preempt.h on why migrate_disable
is undesirable, so new users are likely to be put under the microscope
once Thomas or Peter notice it.

I think you are using it so that an allocation request can be preempted
by a higher-priority task, but given that the code was disabling
interrupts, there was already some preemption latency. However,
migrate_disable is more expensive than preempt_disable (a function call
versus a simple increment). On that basis, I'd recommend starting with
preempt_disable and only using migrate_disable if necessary.
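Roughly, the calling convention being suggested is the sketch below. This
is an illustrative userspace model, not the actual slub code: the stubs
stand in for the kernel primitives, and slab_alloc_slowpath() and
slab_alloc() are hypothetical stand-ins for ___slab_alloc() and its
caller.

```c
#include <assert.h>
#include <stddef.h>

/* Userspace model of the kernel primitives: preempt_disable() and
 * preempt_enable() are essentially an increment/decrement of a
 * per-task counter, which is what makes them cheaper than
 * migrate_disable()'s function call. */
static int preempt_count;
static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }

static char dummy_object;  /* stands in for a slab object */

/* Hypothetical stand-in for ___slab_alloc(): with this convention it
 * disables irqs internally, only around the sections that need them. */
static void *slab_alloc_slowpath(void)
{
    return &dummy_object;
}

/* The caller pins itself with the cheap preempt_disable() so that the
 * s->cpu_slab percpu pointer stays valid across the slow-path call. */
static void *slab_alloc(void)
{
    void *obj;

    preempt_disable();
    obj = slab_alloc_slowpath();
    preempt_enable();
    return obj;
}
```

The trade-off is exactly the one above: preemption is held off for the
duration of the slow path, but the entry/exit cost is a counter bump
rather than a call into the scheduler's migrate-disable machinery.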
Bonus points for adding a comment where ___slab_alloc() disables IRQs to
clarify what is protected -- I assume it's protecting kmem_cache_cpu
from being modified from interrupt context. If so, it's potentially a
local_lock candidate.

-- 
Mel Gorman
SUSE Labs
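P.S. For concreteness, the local_lock shape being suggested might look
like the sketch below. This is a userspace model with simplified
signatures (the real local_lock_irqsave() also takes a flags argument),
and struct kmem_cache_cpu and slab_alloc_from_freelist() are pared-down
hypothetical versions of the real slub structures and paths.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Userspace stand-ins for the kernel's local_lock API. On a
 * non-PREEMPT_RT kernel local_lock_irqsave() reduces to
 * local_irq_save(); on RT it becomes a per-CPU spinlock. Either way,
 * the annotation documents which data the irq-off section protects. */
struct local_lock { bool held; };
static void local_lock_irqsave(struct local_lock *l)      { l->held = true; }
static void local_unlock_irqrestore(struct local_lock *l) { l->held = false; }

/* Hypothetical per-CPU slab state, loosely modelled on kmem_cache_cpu. */
struct kmem_cache_cpu {
    struct local_lock lock;  /* protects freelist below */
    void *freelist;
};

/* Sketch of an allocation path where the lock annotation replaces a
 * bare local_irq_save(), making the protected data explicit. */
static void *slab_alloc_from_freelist(struct kmem_cache_cpu *c)
{
    void *object;

    local_lock_irqsave(&c->lock);
    object = c->freelist;    /* per-CPU state touched only under the lock */
    c->freelist = NULL;
    local_unlock_irqrestore(&c->lock);
    return object;
}
```

The point of the annotation is documentation plus RT-friendliness: the
lock names the data it covers, instead of an anonymous irq-disabled
region.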