Date: Wed, 17 May 2023 10:41:19 +0100
From: Catalin Marinas
To: Christoph Hellwig
Cc: Petr Tesařík, "Michael Kelley (LINUX)", Petr Tesarik, Jonathan Corbet,
 Greg Kroah-Hartman, "Rafael J. Wysocki", Maarten Lankhorst, Maxime Ripard,
 Thomas Zimmermann, David Airlie, Daniel Vetter, Marek Szyprowski,
 Robin Murphy, "Paul E. McKenney", Borislav Petkov, Randy Dunlap,
 Damien Le Moal, Kim Phillips, "Steven Rostedt (Google)", Andy Shevchenko,
 Hans de Goede, Jason Gunthorpe, Kees Cook, Thomas Gleixner,
 "open list:DOCUMENTATION", open list, "open list:DRM DRIVERS",
 "open list:DMA MAPPING HELPERS", Roberto Sassu, Kefeng Wang
Subject: Re: [PATCH v2 RESEND 4/7] swiotlb: Dynamically allocated bounce buffers
References: <346abecdb13b565820c414ecf3267275577dbbf3.1683623618.git.petr.tesarik.ext@huawei.com>
 <20230516061309.GA7219@lst.de>
 <20230516083942.0303b5fb@meshulam.tesarici.cz>
 <20230517083510.0cd7fa1a@meshulam.tesarici.cz>
 <20230517065653.GA25016@lst.de>
In-Reply-To: <20230517065653.GA25016@lst.de>

On Wed, May 17, 2023 at 08:56:53AM +0200, Christoph Hellwig wrote:
> Just thinking out loud:
>
> - what if we always way overallocate the swiotlb buffer
> - and then mark the second half / two thirds / <some fraction picked
>   out of thin air> of the slots as used, and make that region
>   available through a special CMA mechanism as ZONE_MOVABLE (but not
>   allowing other CMA allocations to dip into it).
>
> This allows us to have a single slot management for the entire
> area, but allow reclaiming from it. We'd probably also need to make
> this CMA variant irq safe.

I think this could work. It doesn't need to be ZONE_MOVABLE (and we
actually need this buffer in ZONE_DMA). But we can introduce a new
migrate type, MIGRATE_SWIOTLB, and movable page allocations can use
this range. The CMA allocations go to free_list[MIGRATE_CMA], so they
won't overlap.

One of the downsides is that migrating movable pages still needs a
sleepable context. Another potential confusion is is_swiotlb_buffer()
returning true for pages in this range that were allocated through the
normal page allocator. We may need to check the slots as well rather
than just the buffer boundaries.
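Roughly something like the sketch below (completely untested; it
assumes an in-use slot can be identified by its orig_addr, and
INVALID_PHYS_ADDR would need to move out of kernel/dma/swiotlb.c to be
usable here):

static inline bool is_swiotlb_buffer(struct device *dev,
                                     phys_addr_t paddr)
{
        struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
        unsigned int index;

        if (!mem || paddr < mem->start || paddr >= mem->end)
                return false;

        /* only report addresses backed by a currently allocated slot */
        index = (paddr - mem->start) >> IO_TLB_SHIFT;
        return mem->slots[index].orig_addr != INVALID_PHYS_ADDR;
}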
(We are actually looking at a MIGRATE_METADATA type for the arm64
memory tagging extension, which uses a 3% carveout of RAM for storing
the tags, and people want that memory reused somehow. We have some WIP
patches but we'll post them later this summer.)

> This could still be combined with more aggressive use of per-device
> swiotlb area, which is probably a good idea based on some hints.
> E.g. a device could hint an amount of inflight DMA to the DMA layer,
> and if there are addressing limitations and the amount is large enough
> that could cause the allocation of a per-device swiotlb area.

If we go for one large-ish per-device buffer for specific cases, maybe
we need something similar to rmem_swiotlb_setup() but which can be
dynamically allocated at run-time and live alongside the default
swiotlb. The advantage is that it uses slot tracking similar to the
default swiotlb's, so there is no need to invent another mechanism.
This per-device buffer could also be allocated from the MIGRATE_SWIOTLB
range if we make it large enough at boot. It would be seen as just a
local accelerator for devices that use bouncing frequently or from irq
context.

--
Catalin