Date: Fri, 5 Mar 2021 12:26:11 -0800
From: Minchan Kim
To: Michal Hocko
Cc: Andrew Morton, linux-mm, LKML, joaodias@google.com, surenb@google.com, cgoldswo@codeaurora.org, willy@infradead.org, david@redhat.com,
	vbabka@suse.cz, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH 1/2] mm: disable LRU pagevec during the migration temporarily
Message-ID:
References: <20210302210949.2440120-1-minchan@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:

On Fri, Mar 05, 2021 at 05:06:17PM +0100, Michal Hocko wrote:
> On Wed 03-03-21 12:23:22, Minchan Kim wrote:
> > On Wed, Mar 03, 2021 at 01:49:36PM +0100, Michal Hocko wrote:
> > > On Tue 02-03-21 13:09:48, Minchan Kim wrote:
> > > > An LRU pagevec holds a refcount on its pages until the pagevec is
> > > > drained. That can prevent migration, since the refcount of a page
> > > > is then greater than what the migration logic expects. To mitigate
> > > > the issue, callers of migrate_pages drain the LRU pagevecs via
> > > > migrate_prep or lru_add_drain_all before calling migrate_pages.
> > > >
> > > > However, that is not enough: pages arriving after the draining
> > > > call can still sit in a pagevec and keep preventing page
> > > > migration. Since some callers of migrate_pages have retry logic
> > > > with LRU draining, the page would migrate on the next trial, but
> > > > this is still fragile in that it does not close the fundamental
> > > > race between pages entering a pagevec and migration, so the
> > > > migration failure can ultimately cause a contiguous memory
> > > > allocation failure.
> > > >
> > > > To close the race, this patch disables the LRU caches (i.e., the
> > > > pagevecs) while migration is ongoing, until the migration is done.
> > > >
> > > > Since the issue is really hard to reproduce, I measured how many
> > > > times migrate_pages retried with force mode using the debug code
> > > > below.
> > > >
> > > > int migrate_pages(struct list_head *from, new_page_t get_new_page,
> > > > 	..
> > > > 	..
> > > >
> > > > 	if (rc && reason == MR_CONTIG_RANGE && pass > 2) {
> > > > 		printk(KERN_ERR "pfn 0x%lx reason %d\n", page_to_pfn(page), rc);
> > > > 		dump_page(page, "fail to migrate");
> > > > 	}
> > > >
> > > > The test repeated Android app launches with a CMA allocation in
> > > > the background every five seconds. The total CMA allocation count
> > > > was about 500 during the test. With this patch, the dump_page
> > > > count was reduced from 400 to 30.
> > >
> > > Have you seen any improvement on the CMA allocation success rate?
> >
> > Unfortunately, a cma alloc failure rate with a reasonable margin of
> > error is really hard to reproduce under a real workload. That's why I
> > measured the soft metric instead of direct cma failures under a real
> > workload (I don't want to make some ad hoc artificial benchmark and
> > keep tuning system knobs until it shows an extremely exaggerated
> > result to convince people of the patch's effect).
> >
> > Please say so if you believe this work is pointless unless there is
> > stable data under a reproducible scenario. I am happy to drop it.
>
> Well, I am not saying that this is pointless. In the end the resulting
> change is relatively small and it provides useful functionality for
> other users (e.g. hotplug). That should be a sufficient justification.

Yub, that was my impression too: it is worth upstreaming rather than
keeping it in a downstream tree and letting the trees diverge.
> I was asking about the CMA allocation success rate because that is a
> much more reasonable metric than how many times something has retried:
> retries can help to increase the success rate, and the patch doesn't
> really remove them. If you want to use the number of retries as a
> metric, then the average allocation latency would be more meaningful.

I believe the base allocation latency would be pretty big and the retrial
part would be marginal, so I doubt it's meaningful.

Let me send the next revision with the description as-is once I fix the
places you pointed out.

Thanks.