From: Yang Shi <shy828301@gmail.com>
To: guro@fb.com, ktkhai@virtuozzo.com, vbabka@suse.cz, shakeelb@google.com, david@fromorbit.com, hannes@cmpxchg.org, mhocko@suse.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v7 PATCH 12/12] mm: vmscan: shrink deferred objects proportional to priority
Date: Tue, 9 Feb 2021 09:46:46 -0800
Message-Id: <20210209174646.1310591-13-shy828301@gmail.com>
In-Reply-To: <20210209174646.1310591-1-shy828301@gmail.com>
References: <20210209174646.1310591-1-shy828301@gmail.com>

The number of deferred objects might wind up to an absurd value, which
results in the slab objects being clamped. This is undesirable for
sustaining the working set. So shrink the deferred objects proportional
to the reclaim priority, and cap nr_deferred to twice the number of
cache items.

The idea is borrowed from Dave Chinner's patch:
https://lore.kernel.org/linux-xfs/20191031234618.15403-13-david@fromorbit.com/

Tested with a kernel build and a VFS-metadata-heavy workload in our
production environment; no regression has been spotted so far.
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/vmscan.c | 40 +++++-----------------------------------
 1 file changed, 5 insertions(+), 35 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 66163082cc6f..d670b119d6bd 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -654,7 +654,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 	 */
 	nr = count_nr_deferred(shrinker, shrinkctl);
 
-	total_scan = nr;
 	if (shrinker->seeks) {
 		delta = freeable >> priority;
 		delta *= 4;
@@ -668,37 +667,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		delta = freeable / 2;
 	}
 
+	total_scan = nr >> priority;
 	total_scan += delta;
-	if (total_scan < 0) {
-		pr_err("shrink_slab: %pS negative objects to delete nr=%ld\n",
-		       shrinker->scan_objects, total_scan);
-		total_scan = freeable;
-		next_deferred = nr;
-	} else
-		next_deferred = total_scan;
-
-	/*
-	 * We need to avoid excessive windup on filesystem shrinkers
-	 * due to large numbers of GFP_NOFS allocations causing the
-	 * shrinkers to return -1 all the time. This results in a large
-	 * nr being built up so when a shrink that can do some work
-	 * comes along it empties the entire cache due to nr >>>
-	 * freeable. This is bad for sustaining a working set in
-	 * memory.
-	 *
-	 * Hence only allow the shrinker to scan the entire cache when
-	 * a large delta change is calculated directly.
-	 */
-	if (delta < freeable / 4)
-		total_scan = min(total_scan, freeable / 2);
-
-	/*
-	 * Avoid risking looping forever due to too large nr value:
-	 * never try to free more than twice the estimate number of
-	 * freeable entries.
-	 */
-	if (total_scan > freeable * 2)
-		total_scan = freeable * 2;
+	total_scan = min(total_scan, (2 * freeable));
 
 	trace_mm_shrink_slab_start(shrinker, shrinkctl, nr, freeable, delta,
				   total_scan, priority);
@@ -737,10 +708,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
 		cond_resched();
 	}
 
-	if (next_deferred >= scanned)
-		next_deferred -= scanned;
-	else
-		next_deferred = 0;
+	next_deferred = max_t(long, (nr - scanned), 0) + total_scan;
+	next_deferred = min(next_deferred, (2 * freeable));
+
 	/*
 	 * move the unused scan count back into the shrinker in a
 	 * manner that handles concurrent updates.
-- 
2.26.2