Date: Wed, 19 May 2021 16:56:30 +0100
From: Mel Gorman <mgorman@suse.de>
To: Uladzislau Rezki <urezki@gmail.com>
Cc: Christoph Hellwig, Andrew Morton, linux-mm@kvack.org, LKML,
	Matthew Wilcox, Nicholas Piggin, Hillf Danton, Michal Hocko,
	Oleksiy Avramchenko, Steven Rostedt
Subject: Re: [PATCH 2/3] mm/vmalloc: Switch to bulk allocator in __vmalloc_area_node()
Message-ID: <20210519155630.GD3672@suse.de>
References: <20210516202056.2120-1-urezki@gmail.com> <20210516202056.2120-3-urezki@gmail.com> <20210519143900.GA2262@pc638.lan>
In-Reply-To: <20210519143900.GA2262@pc638.lan>

On Wed, May 19, 2021 at 04:39:00PM +0200, Uladzislau Rezki wrote:
> > > +	/*
> > > +	 * If not enough pages were obtained to accomplish an
> > > +	 * allocation request, free them via __vfree() if any.
> > > +	 */
> > > +	if (area->nr_pages != nr_small_pages) {
> > > +		warn_alloc(gfp_mask, NULL,
> > > +			"vmalloc size %lu allocation failure: "
> > > +			"page order %u allocation failed",
> > > +			area->nr_pages * PAGE_SIZE, page_order);
> > > +		goto fail;
> > > +	}
> >
> > From reading __alloc_pages_bulk, not allocating all pages is something
> > that can happen fairly easily. Shouldn't we try to allocate the missing
> > pages manually and/or retry here?
> >
> It is a good point.
> The bulk-allocator, as I see it, only tries to access the pcp-list and
> falls back to the single-page allocator once that fails, so the array
> may not be fully populated.
>

Partially correct. It does allocate via the pcp-list, but the pcp-list
will be refilled if it is empty, so if the bulk allocator returns fewer
pages than requested, it may be because watermarks were hit or the local
zone is depleted. It does not take any special action to correct the
situation or stall, e.g. wake kswapd, enter direct reclaim, allocate
from a remote node, etc.

If no pages were allocated at all, it will try to allocate at least one
page via a single allocation request, in case the bulk request would
push the zone over the watermark but one page would not. As a
side-effect, that path also wakes kswapd.

> In that case probably it makes sense to manually populate it using
> the single page allocator.
>
> Mel, could you please also comment on it?
>

It is by design, because it is unknown whether callers can recover or,
if so, how they want to recover, and the primary intent behind the bulk
allocator was speed. In the network case, the caller only wants some
pages quickly, so as long as it gets one, it makes progress. The sunrpc
user is willing to wait and retry. For vmalloc, I'm unsure what a
suitable recovery path should be, as I do not have a good handle on
workloads that are sensitive to vmalloc performance.

The obvious option would be to loop and allocate the missing pages one
at a time with alloc_pages_node, understanding that those additional
pages may take longer to allocate.

An alternative option would be to define either __GFP_RETRY_MAYFAIL or
__GFP_NORETRY semantics for the bulk allocator so that it handles the
retry in its own failure path. That is a bit more complex because the
existing __GFP_RETRY_MAYFAIL semantics deal with costly high-order
allocations. __GFP_NORETRY would be slightly trickier, although it
makes more sense: the bulk allocator would retry on failure unless
__GFP_NORETRY was specified.
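For what it's worth, the first option could look roughly like the
following in __vmalloc_area_node(). This is an untested sketch that
assumes the local variables visible in the patch context above (area,
node, nr_small_pages, gfp_mask) and order-0 pages only:

```c
	/*
	 * Bulk allocation has filled area->pages[0..area->nr_pages-1];
	 * fall back to single-page allocations for any remainder. These
	 * go through the normal slow path, so they may stall (kswapd,
	 * direct reclaim) where the bulk allocator would not.
	 */
	while (area->nr_pages < nr_small_pages) {
		struct page *page = alloc_pages_node(node, gfp_mask, 0);

		if (!page)
			break;	/* genuinely failed: fall through to warn_alloc */

		area->pages[area->nr_pages++] = page;
	}

	if (area->nr_pages != nr_small_pages)
		goto fail;	/* existing warn_alloc()/__vfree() error path */
```

Whether the extra latency of the slow-path allocations is acceptable
depends on how performance-sensitive the vmalloc callers really are.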
For the __GFP_NORETRY option, the network path would need to be updated
to pass the __GFP_NORETRY flag, as it almost certainly does not want
looping behaviour.

-- 
Mel Gorman
SUSE Labs