Subject: Re: [PATCH 1/2] mm: page_frag: Check fragsz at the beginning of __page_frag_alloc_align()
Date: Wed, 16 Apr 2025 15:37:38 +0800
To: Haiyang Zhang
References:
 <1743715309-318-1-git-send-email-haiyangz@microsoft.com>
 <1743715309-318-2-git-send-email-haiyangz@microsoft.com>
From: Yunsheng Lin
In-Reply-To: <1743715309-318-2-git-send-email-haiyangz@microsoft.com>

On 2025/4/4 5:21, Haiyang Zhang wrote:
> Frag allocator is not designed for fragsz > PAGE_SIZE. So, check and return
> the error at the beginning of __page_frag_alloc_align(), instead of
> succeeding a few times and then failing due to not refilling the cache.
>
> Signed-off-by: Haiyang Zhang
> ---
>  mm/page_frag_cache.c | 22 +++++++++-------------
>  1 file changed, 9 insertions(+), 13 deletions(-)
>
> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
> index d2423f30577e..d6bf022087e7 100644
> --- a/mm/page_frag_cache.c
> +++ b/mm/page_frag_cache.c
> @@ -98,6 +98,15 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
>          unsigned int size, offset;
>          struct page *page;
>
> +        if (unlikely(fragsz > PAGE_SIZE)) {
> +                /*
> +                 * The caller is trying to allocate a fragment
> +                 * with fragsz > PAGE_SIZE which is not supported
> +                 * by design. So we simply return NULL here.
> +                 */
> +                return NULL;
> +        }

The check is done further below to avoid paying for it in the likely case
where the cache is big enough, as the frag API is mostly used to allocate
small pieces of memory. And it seems my recent refactoring of the frag API
has made two existing cases of frag API misuse more obvious, if I recall
correctly.

If being more explicit about that for all code paths is really helpful,
perhaps VM_BUG_ON() is an option: it makes the misuse explicit while still
avoiding the check in the fast path as much as possible.
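Something like the below completely untested sketch (hunk context copied
from your diff, hunk line numbers omitted on purpose); treat it as an
illustration of the idea rather than a tested change:

--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ ... @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
         unsigned int size, offset;
         struct page *page;
 
+        /*
+         * fragsz > PAGE_SIZE is not supported by design; make the misuse
+         * loud in CONFIG_DEBUG_VM builds instead of adding a branch to
+         * the fast path of production builds.
+         */
+        VM_BUG_ON(fragsz > PAGE_SIZE);
+
         if (unlikely(!encoded_page)) {
 refill:
                 page = __page_frag_cache_refill(nc, gfp_mask);

With CONFIG_DEBUG_VM disabled, VM_BUG_ON() falls back to
BUILD_BUG_ON_INVALID(), so the check should not add any runtime cost for
production kernels.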
> +
>          if (unlikely(!encoded_page)) {
> refill:
>                  page = __page_frag_cache_refill(nc, gfp_mask);
> @@ -119,19 +128,6 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
>          size = PAGE_SIZE << encoded_page_decode_order(encoded_page);
>          offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
>          if (unlikely(offset + fragsz > size)) {
> -                if (unlikely(fragsz > PAGE_SIZE)) {
> -                        /*
> -                         * The caller is trying to allocate a fragment
> -                         * with fragsz > PAGE_SIZE but the cache isn't big
> -                         * enough to satisfy the request, this may
> -                         * happen in low memory conditions.
> -                         * We don't release the cache page because
> -                         * it could make memory pressure worse
> -                         * so we simply return NULL here.
> -                         */
> -                        return NULL;
> -                }
> -
>                  page = encoded_page_decode_page(encoded_page);
>
>                  if (!page_ref_sub_and_test(page, nc->pagecnt_bias))