Date: Fri, 19 Aug 2022 16:45:05 +0100
From: Matthew Wilcox
To: Uladzislau Rezki
Cc: Thomas Gleixner, Ira Weiny, "Fabio M. De Francesco",
 Luis Chamberlain, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC] vmap_folio()
List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org

On Fri, Aug 19, 2022 at 12:53:32PM +0200, Uladzislau Rezki wrote:
> Looks pretty straightforward. One thing, though: could we combine it
> with vmap(), since it is a copy-paste in some sense? Say, have
> something like __vmap() that is reused by both vmap_folio() and vmap().
>
> But that is just a thought.

Thanks for looking it over!

Combining it with vmap() or vm_map_ram() is tricky. Today, we assume
that each struct page pointer refers to exactly PAGE_SIZE bytes, so if
somebody calls alloc_pages(__GFP_COMP, 4) and then passes the head page
to vmap(), only that one page gets mapped. I don't know whether any
current callers depend on that behaviour.

Now that I look at the future customers of this, I think I erred in
basing it on vmap(); it looks like vm_map_ram() is preferred. So I'll
redo it based on vm_map_ram().
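[For readers following along: a minimal sketch of what a vm_map_ram()-based vmap_folio() might look like. This is hypothetical kernel code, not a patch from this thread; the function name vmap_folio() comes from the subject line, but the body, error handling, and the choice to expand the folio into a page array are assumptions. vm_map_ram() takes an array of order-0 struct page pointers, which is exactly the PAGE_SIZE-per-pointer assumption discussed above.]

```c
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/*
 * Hypothetical sketch only -- the real vmap_folio() may differ.
 * Map every page of a (possibly compound) folio contiguously into
 * vmalloc space and return the virtual address, or NULL on failure.
 */
static void *vmap_folio(struct folio *folio)
{
	unsigned int i, nr = folio_nr_pages(folio);
	struct page **pages;
	void *addr;

	/*
	 * vm_map_ram() expects one struct page pointer per PAGE_SIZE,
	 * so a large folio must be expanded into its constituent pages
	 * rather than passed as a single head-page pointer.
	 */
	pages = kmalloc_array(nr, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	for (i = 0; i < nr; i++)
		pages[i] = folio_page(folio, i);

	addr = vm_map_ram(pages, nr, NUMA_NO_NODE);
	kfree(pages);	/* vm_map_ram() no longer needs the array */
	return addr;
}
```

The temporary page array is the cost of reusing vm_map_ram() as-is; a dedicated helper that walks the folio directly could avoid the allocation.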