From mboxrd@z Thu Jan 1 00:00:00 1970
From: Hasan Basbunar
To: hawk@kernel.org, ilias.apalodimas@linaro.org, kuba@kernel.org,
	pabeni@redhat.com, edumazet@google.com, davem@davemloft.net
Cc: horms@kernel.org, almasrymina@google.com, asml.silence@gmail.com,
	kaiyuanz@google.com, willemb@google.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Hasan Basbunar
Subject: [PATCH] page_pool: fix memory-provider leak in page_pool_create_percpu() error path
Date: Tue, 28 Apr 2026 19:07:39 +0200
Message-ID: <20260428170739.34881-1-basbunarhasan@gmail.com>
X-Mailer: git-send-email 2.53.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When page_pool_create_percpu() fails in page_pool_list(), it falls
through to its err_uninit: label, which calls page_pool_uninit(). At
that point page_pool_init() has already taken two references when the
user requested PP_FLAG_ALLOW_UNREADABLE_NETMEM:

	pool->mp_ops->init(pool);
	static_branch_inc(&page_pool_mem_providers);

Neither is undone by page_pool_uninit(); both are only undone by
__page_pool_destroy(), the success-side teardown. The error path
therefore leaks the per-provider reference taken by mp_ops->init()
(io_zcrx_ifq->refs in the io_uring zcrx provider, the dmabuf binding
refcount in the devmem provider), plus one increment of the
page_pool_mem_providers static branch, on every failure of
xa_alloc_cyclic() inside page_pool_list().
The leaked io_zcrx_ifq->refs in turn pins everything io_zcrx_ifq_free()
would release on cleanup: ifq->user (uid), ifq->mm_account (mmdrop),
ifq->dev (device refcount), ifq->netdev_tracker (netdev refcount), and
the rbuf region. The leaked static branch increment forces all
subsequent page_pool_alloc_netmems() and page_pool_return_page()
callers to take the slow mp_ops branch for the lifetime of the kernel.

Reachable via the io_uring zcrx path:

  io_uring_register(IORING_REGISTER_ZCRX_IFQ)   /* CAP_NET_ADMIN */
    -> __io_uring_register
    -> io_register_zcrx
    -> zcrx_register_netdev
    -> netif_mp_open_rxq
    -> driver ndo_queue_mem_alloc
    -> page_pool_create_percpu
       -> page_pool_init succeeds (mp_ops->init runs, branch++)
       -> page_pool_list fails (xa_alloc_cyclic -ENOMEM)
       -> goto err_uninit                        <-- leak

The same shape applies to the devmem dmabuf provider via
mp_dmabuf_devmem_init()/mp_dmabuf_devmem_destroy().

Restore the cleanup symmetry by moving the mp_ops->destroy() and
static_branch_dec() calls out of __page_pool_destroy() and into
page_pool_uninit(), so that page_pool_uninit() is again the strict
inverse of page_pool_init(). page_pool_uninit() has only two callers
(the err_uninit: path and __page_pool_destroy()), so this preserves
the single-call invariant on the success path while fixing the error
path.

The error path of page_pool_init() itself still skips the mp_ops
cleanup correctly: mp_ops->init() is the last action that takes a
reference before page_pool_init() returns 0, so when it returns an
error neither the refcount nor the static branch has been touched.

Triggering the bug requires xa_alloc_cyclic() to fail with -ENOMEM,
which is rare under normal GFP_KERNEL retry behaviour. It is
deterministic under CONFIG_FAULT_INJECTION with fail_page_alloc / xa
fault injection, or under sustained memory pressure. The leak is
silent: there is no warning, and the kernel simply keeps running with
a permanently incremented static branch.
Fixes: 0f9214046893 ("memory-provider: dmabuf devmem memory provider")
Signed-off-by: Hasan Basbunar
---
 net/core/page_pool.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 877bbf7a1938..6e576dec80db 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -327,6 +327,11 @@ static void page_pool_uninit(struct page_pool *pool)
 	if (!pool->system)
 		free_percpu(pool->recycle_stats);
 #endif
+
+	if (pool->mp_ops) {
+		pool->mp_ops->destroy(pool);
+		static_branch_dec(&page_pool_mem_providers);
+	}
 }
 
 /**
@@ -1146,11 +1151,6 @@ static void __page_pool_destroy(struct page_pool *pool)
 	page_pool_unlist(pool);
 	page_pool_uninit(pool);
 
-	if (pool->mp_ops) {
-		pool->mp_ops->destroy(pool);
-		static_branch_dec(&page_pool_mem_providers);
-	}
-
 	kfree(pool);
 }
 
-- 
2.53.0