Date: Tue, 31 Mar 2026 08:32:30 -0700
From: Shakeel Butt <shakeel.butt@linux.dev>
To: Hui Zhu
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
	Andrew Morton, cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Hui Zhu
Subject: Re: [PATCH mm-stable v3] mm/memcontrol: batch memcg charging in __memcg_slab_post_alloc_hook
References: <20260331091707.226786-1-hui.zhu@linux.dev>
In-Reply-To: <20260331091707.226786-1-hui.zhu@linux.dev>

On Tue, Mar 31, 2026 at 05:17:07PM +0800, Hui Zhu wrote:
> From: Hui Zhu
> 
> When kmem_cache_alloc_bulk() allocates multiple objects, the post-alloc
> hook __memcg_slab_post_alloc_hook() previously charged memcg one object
> at a time, even though consecutive objects may reside on slabs backed by
> the same pgdat node.
> 
> Batch the memcg charging by scanning ahead from the current position to
> find a contiguous run of objects whose slabs share the same pgdat, then
> issue a single __obj_cgroup_charge() / __consume_obj_stock() call for
> the entire run. The per-object obj_ext assignment loop is preserved
> as-is, since it cannot be collapsed further.
> 
> This implements the TODO comment left in commit bc730030f956 ("memcg:
> combine slab obj stock charging and accounting").
> 
> The existing error-recovery contract is unchanged: if size == 1,
> memcg_alloc_abort_single() frees the sole object; for larger bulk
> allocations, kmem_cache_free_bulk() uncharges any objects that were
> already charged before the failure.
> 
> Benchmark using kmem_cache_alloc_bulk() with SLAB_ACCOUNT
> (iters=100000):
> 
> bulk=32  before: 215 ns/object  after: 174 ns/object  (-19%)
> bulk=1   before: 344 ns/object  after: 335 ns/object  (  ~ )
> 
> No measurable regression for bulk=1, as expected.
> 
> Signed-off-by: Hui Zhu

Do we have an actual user of kmem_cache_alloc_bulk() with __GFP_ACCOUNT
in the kernel? If yes, can you please benchmark that usage? Otherwise
can we please wait for an actual user before adding more complexity?
Alternatively, you can look for opportunities to introduce
kmem_cache_alloc_bulk() users with __GFP_ACCOUNT and add this
optimization along with the user.

Have you looked at the bulk free side? I think we already have RCU bulk
freeing as a user there. Did you find any opportunities to optimize
__memcg_slab_free_hook() for bulk free?