From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 12 Apr 2023 08:31:42 -0700
From: Christoph Hellwig
To: David Howells
Cc: netdev@vger.kernel.org, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Willem de Bruijn, David Ahern, Matthew Wilcox, Al Viro,
    Christoph Hellwig, Jens Axboe, Jeff Layton, Christian Brauner,
    Chuck Lever III, Linus Torvalds, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Jeroen de Borst,
    Catherine Sullivan, Shailend Chand, Felix Fietkau, John Crispin,
    Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger,
    AngeloGioacchino Del Regno, Keith Busch, Sagi Grimberg,
    Chaitanya Kulkarni, Andrew Morton, linux-arm-kernel@lists.infradead.org,
    linux-mediatek@lists.infradead.org, linux-nvme@lists.infradead.org
Subject: Re: [PATCH net-next v6 04/18] mm: Make the page_frag_cache allocator use per-cpu
References: <20230411160902.4134381-1-dhowells@redhat.com> <20230411160902.4134381-5-dhowells@redhat.com>
In-Reply-To: <20230411160902.4134381-5-dhowells@redhat.com>

On Tue, Apr 11, 2023 at 05:08:48PM +0100, David Howells wrote:
> Make the page_frag_cache allocator have a separate allocation bucket for
> each cpu to avoid racing. This means that no lock is required, other than
> preempt disablement, to allocate from it, though if a softirq wants to
> access it, then softirq disablement will need to be added.

Can you show any performance numbers?

> Make the NVMe, mediatek and GVE drivers pass in NULL to page_frag_cache()
> and use the default allocation buckets rather than defining their own.

Let me ask a third time, as I have not got an answer the last two times:
why are these callers treated differently from the others?