Date: Mon, 30 May 2022 12:29:18 -0700
From: Jakub Kicinski
To: Chen Lin
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Alexander Duyck, netdev@vger.kernel.org
Subject: Re: [PATCH v2] mm: page_frag: Warn_on when frag_alloc size is bigger than PAGE_SIZE
Message-ID: <20220530122918.549ef054@kernel.org>
In-Reply-To: <20220530122705.4e74bc1e@kernel.org>
References: <20220529163029.12425c1e5286d7c7e3fe3708@linux-foundation.org>
 <1653917942-5982-1-git-send-email-chen45464546@163.com>
 <20220530122705.4e74bc1e@kernel.org>

On Mon, 30 May 2022 12:27:05 -0700 Jakub Kicinski wrote:
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e008a3df0485..360a545ee5e8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5537,6 +5537,7 @@ EXPORT_SYMBOL(free_pages);
>   * sk_buff->head, or to be used in the "frags" portion of skb_shared_info.
>   */
>  static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
> +                                             unsigned int fragsz,
>                                               gfp_t gfp_mask)
>  {
>          struct page *page = NULL;
> @@ -5549,7 +5550,7 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
>                                  PAGE_FRAG_CACHE_MAX_ORDER);
>          nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
>  #endif
> -        if (unlikely(!page))
> +        if (unlikely(!page && fragsz <= PAGE_SIZE))
>                  page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
>
>          nc->va = page ? page_address(page) : NULL;
> @@ -5576,7 +5577,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
>
>          if (unlikely(!nc->va)) {
>  refill:
> -                page = __page_frag_cache_refill(nc, gfp_mask);
> +                page = __page_frag_cache_refill(nc, fragsz, gfp_mask);
>                  if (!page)
>                          return NULL;

Oh, well, the reuse path also needs an update. We can slap a similar condition next to the pfmemalloc check.
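
Not an actual patch, just an untested sketch of what that extra condition
in the reuse path of page_frag_alloc_align() could look like, assuming the
fragsz parameter from the hunk above and the existing free_the_page() /
goto refill structure (the #if block that refreshes `size` from nc->size
would have to move above the check so it sees the real cache size):

        offset = nc->offset - fragsz;
        if (unlikely(offset < 0)) {
                page = virt_to_page(nc->va);

                if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
                        goto refill;

#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
                /* if size can vary use size else just use PAGE_SIZE */
                size = nc->size;
#endif
                /* Same idea as the pfmemalloc case: if the page we are
                 * holding can't fit this request (e.g. the high-order
                 * allocation failed earlier and we fell back to an
                 * order-0 page), drop it and go back through refill,
                 * which with the hunk above fails cleanly for
                 * fragsz > PAGE_SIZE instead of handing out memory
                 * beyond the end of the page.
                 */
                if (unlikely(nc->pfmemalloc || fragsz > size)) {
                        free_the_page(page, compound_order(page));
                        goto refill;
                }
                ...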