Date: Mon, 5 Aug 2024 16:23:55 +0200
Subject: Re: [PATCH V8 1/2] cgroup/rstat: Avoid flushing if there is an ongoing overlapping flush
To: Yosry Ahmed
Cc: tj@kernel.org, cgroups@vger.kernel.org, shakeel.butt@linux.dev,
 hannes@cmpxchg.org, lizefan.x@bytedance.com, longman@redhat.com,
 kernel-team@cloudflare.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <172139415725.3084888.13770938453137383953.stgit@firesoul>
From: Jesper Dangaard Brouer

On 02/08/2024 18.10, Yosry Ahmed wrote:
> On Fri, Aug 2, 2024 at 4:43 AM Jesper Dangaard Brouer wrote:
>>
>>
>> On 30/07/2024 20.54, Yosry Ahmed wrote:
>>> [..]
>>>>
>>>> Well... I'm still not convinced that it makes sense to have level >= 2
>>>> be the ongoing flusher.
>>>>
>>>> E.g. if a level 2 cgroup becomes ongoing flusher, and kswapd starts 12
>>>> NUMA flushes at the same time, then the code will have these 12 kswapd
>>>> threads spin on the lock, until ongoing flusher finishes. That is likely
>>>> what happened above (for a level 1). These 12 spinning (root) flushers
>>>> will not recheck ongoing_flusher and will all flush the root
>>>> (unnecessarily 11 times).
>>>
>>> Hmm regardless of whether or not the level-2 cgroup becomes the
>>> ongoing flusher, the kswapd threads will all spin on the lock anyway
>>> since none of them can be the ongoing flusher until the level-2 cgroup
>>> finishes. Right?
>>>
>>> Is the scenario you have in mind that the level-2 cgroup starts
>>> flushing at the same time as kswapd, so there is a race on who gets to
>>> be the ongoing flusher?
>>> In this case as well, whoever gets the lock will be the ongoing
>>> flusher anyway.
>>>
>>> Not allowing whoever is holding the lock to be the ongoing flusher
>>> based on level is only useful when we can have multiple ongoing
>>> flushers (with lock yielding). Right?
>>>
>>> Perhaps I am missing something here.
>>>
>>>>
>>>> So, I don't think it is a good idea to have anything else than the root
>>>> as the ongoing flusher.
>>>>
>>>> Can you explain/convince me why having sub-cgroups as ongoing flusher is
>>>> an advantage?
>>>
>>> I just don't see the benefit of the special casing here as I mentioned
>>> above. If I missed something please let me know.
>>>
>>
>> I do think you missed something. Let me try to explain this in another
>> way. (I hope my frustration doesn't shine through.)
>>
>> The main purpose of the patch is/was to stop the thundering herd of
>> kswapd threads flushing (root-cgrp) at exactly the same time, leading to
>> lock contention. This happens all-the-time/constantly in production.
>>
>> The first versions (where ongoing was limited to root/level=0) solved
>> this 100%. The patches that generalized this so that all levels can
>> become the ongoing flusher no longer solve the problem!
>>
>> I hope it is clear what fails. E.g. when a level>0 cgroup becomes the
>> ongoing flusher, and 12 kswapd threads simultaneously do a
>> level=0/root-cgrp flush, then we have 12 CPU cores spinning on the rstat
>> lock. (These 12 kswapd threads will all go through completing the flush,
>> as they do not discover/recheck that the ongoing flush has meanwhile
>> become their own level.)
>
> I think we may be speaking past one another, let me try to clarify :)
>
> I agree with your assessment, all I am saying is that this restriction
> is only needed because of lock yielding, and can be removed after that
> IIUC.
>
> The problem when we allow non-root ongoing flushers now is that when
> the kswapd threads are woken up and the first one of them gets the lock
> and does the flush, it may find that the ongoing_flusher is already
> set by another non-root flusher that yielded the lock. In this case,
> the following kswapd flushers will spin on the lock instead of waiting
> for the first kswapd to finish.
>
> If we remove lock yielding, then the above scenario cannot happen.

I think this is where we disagree/talk past each other. Looking at the
code, I do believe the situation *also* occurs without any lock yielding
involved. Yes, the situation is far worse when we have lock yielding, but
it also happens in the default case.

> When the lock/mutex is held by a flusher, it is guaranteed that
> ongoing_flusher is NULL and can be set by the flusher. In this case,
> we should allow any cgroup to be the ongoing_flusher because there can
> only be one anyway.
>

With the current patch proposal [V8 or V9], and assuming we have no lock
yielding: do we agree that 12 kswapd threads will be waiting on the lock
when a level>0 cgroup was the ongoing flusher at the time they started?

Then the level>0 cgroup finishes being the ongoing flusher.

Then kswapd0 gets the lock, observes NULL as ongoing, and becomes ongoing.
Then kswapd1 gets the lock, observes NULL as ongoing, and becomes ongoing.
Then kswapd2 gets the lock, observes NULL as ongoing, and becomes ongoing.
Then kswapd3 gets the lock, observes NULL as ongoing, and becomes ongoing.
Then kswapd4 gets the lock, observes NULL as ongoing, and becomes ongoing.
Then kswapd5 gets the lock, observes NULL as ongoing, and becomes ongoing.
Then kswapd6 gets the lock, observes NULL as ongoing, and becomes ongoing.
[etc]

Please let me know if I misunderstood my own code, and you believe this
scenario cannot happen.

If the above happens, then the patch has not solved the kswapd thundering
herd issue that we observe in production.

The point/problem is that once kswapd is waiting on the lock, the code
doesn't re-check the ongoing flusher, so every kswapd thread will be
spinning on the lock and every kswapd thread will need to go through the
flush. When a kswapd thread gets the lock, it will observe ongoing as
NULL, so it cannot detect that another level=0 flusher was just the
ongoing flusher.

(A small stand-alone sketch of this flow is included as a P.S. below.)

--Jesper
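
P.S. To make the above concrete, below is a rough stand-alone user-space
model of the flow I am describing. This is *not* the kernel code and not
the V8 patch; the names (ongoing_level, rstat_lock, kswapd(), etc.) are
simplified stand-ins. It only illustrates the pattern: the ongoing-flusher
check happens before blocking on the lock, so every kswapd-like thread
that is already queued on the lock ends up doing its own full flush.

/* Stand-alone user-space model (NOT kernel code, NOT the V8 patch).
 * 12 "kswapd" threads want a root-level flush while a level-2 flush
 * holds the lock.  The ongoing-flusher check is only done before
 * blocking on the lock, so each waiter ends up doing its own flush.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t rstat_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int ongoing_level = -1;   /* -1 means: no ongoing flusher */
static int full_flushes;                /* updated under rstat_lock     */

static void flush(int level)
{
        /* Pre-lock check only: a root (level 0) flush can piggyback on
         * an ongoing root flush, but not on an ongoing level>0 flush.
         * (In this toy model we simply skip instead of waiting; the
         * point is only where the check is placed.)
         */
        if (level == 0 && atomic_load(&ongoing_level) == 0)
                return;

        pthread_mutex_lock(&rstat_lock);        /* kswapd piles up here */
        /* No re-check of ongoing_level after getting the lock, so each
         * waiter observes "no ongoing flusher", becomes ongoing itself
         * and performs a full flush.
         */
        atomic_store(&ongoing_level, level);
        full_flushes++;
        usleep(10 * 1000);                      /* simulate an expensive flush */
        atomic_store(&ongoing_level, -1);
        pthread_mutex_unlock(&rstat_lock);
}

static void *kswapd(void *arg)
{
        (void)arg;
        flush(0);                               /* root-cgroup flush */
        return NULL;
}

static void *level2_flusher(void *arg)
{
        (void)arg;
        flush(2);                               /* sub-cgroup (level 2) flush */
        return NULL;
}

int main(void)
{
        pthread_t lvl2, t[12];
        int i;

        pthread_create(&lvl2, NULL, level2_flusher, NULL);
        usleep(1000);                   /* crude: let the level-2 flush start */

        for (i = 0; i < 12; i++)        /* 12 kswapd wake up simultaneously */
                pthread_create(&t[i], NULL, kswapd, NULL);

        pthread_join(lvl2, NULL);
        for (i = 0; i < 12; i++)
                pthread_join(t[i], NULL);

        /* Ideally 2 (one level-2 flush plus one root flush that the other
         * 11 kswapd piggyback on); without a re-check it is typically 13.
         */
        printf("full flushes: %d\n", full_flushes);
        return 0;
}

Compiled with gcc -pthread, this typically prints 13 full flushes rather
than the 2 one would hope for (one level-2 flush plus a single root flush
that the other 11 kswapd threads piggyback on).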