Date: Wed, 25 Jan 2023 01:06:45 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Christoph Lameter
Cc: Jesper Dangaard Brouer, netdev@vger.kernel.org, linux-mm@kvack.org,
	Andrew Morton, Mel Gorman, Joonsoo Kim, penberg@kernel.org,
	vbabka@suse.cz, Jakub Kicinski, "David S. Miller",
	edumazet@google.com, pabeni@redhat.com
Subject: Re: [PATCH RFC] mm+net: allow to set kmem_cache create flag for SLAB_NEVER_MERGE
References: <167396280045.539803.7540459812377220500.stgit@firesoul>
 <36f5761f-d4d9-4ec9-a64-7a6c6c8b956f@gentwo.de>
In-Reply-To: <36f5761f-d4d9-4ec9-a64-7a6c6c8b956f@gentwo.de>
On Tue, Jan 17, 2023 at 03:54:34PM +0100, Christoph Lameter wrote:
> On Tue, 17 Jan 2023, Jesper Dangaard Brouer wrote:
>
> > When running different network performance microbenchmarks, I started
> > to notice that performance was reduced (slightly) when machines had
> > longer uptimes. I believe the cause was 'skbuff_head_cache' got
> > aliased/merged into the general slub for 256 bytes sized objects (with
> > my kernel config, without CONFIG_HARDENED_USERCOPY).
>
> Well that is a common effect that we see in multiple subsystems. This is
> due to general memory fragmentation. Depending on the prior load the
> performance could actually be better after some runtime if the caches are
> populated avoiding the page allocator etc.
>
> The merging could actually be beneficial since there may be more partial
> slabs to allocate from and thus avoiding expensive calls to the page
> allocator.
>
> I wish we had some effective way of memory defragmentation.

If general memory fragmentation is the actual cause of this problem, it may
be worsened by [1], due to the assumption that all allocations are done in
the same order as s->oo when accounting for and limiting the number of
percpu slabs.

[1] https://lore.kernel.org/linux-mm/76c63237-c489-b942-bdd9-5720042f52a9@suse.cz