Date: Mon, 10 May 2021 17:05:16 +0200
From: Christoph Hellwig
To: Claire Chang
Cc: Rob Herring, mpe@ellerman.id.au, Joerg Roedel, Will Deacon,
	Frank Rowand, Konrad Rzeszutek Wilk, boris.ostrovsky@oracle.com,
	jgross@suse.com, Christoph Hellwig, Marek Szyprowski,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS", sstabellini@kernel.org,
	Robin Murphy, grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding, mingo@kernel.org, bauerman@linux.ibm.com,
	peterz@infradead.org, Greg KH, Saravana Kannan,
	"Rafael J. Wysocki", heikki.krogerus@linux.intel.com,
	Andy Shevchenko, Randy Dunlap, Dan Williams,
	Bartosz Golaszewski, linux-devicetree, lkml,
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org,
	Nicolas Boichat, Jim Quinlan, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v6 08/15] swiotlb: Bounce data from/to restricted DMA pool if available
Message-ID: <20210510150516.GE28066@lst.de>
References: <20210510095026.3477496-1-tientzu@chromium.org>
	<20210510095026.3477496-9-tientzu@chromium.org>
In-Reply-To: <20210510095026.3477496-9-tientzu@chromium.org>

> +static inline bool is_dev_swiotlb_force(struct device *dev)
> +{
> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +	if (dev->dma_io_tlb_mem)
> +		return true;
> +#endif /* CONFIG_DMA_RESTRICTED_POOL */
> +	return false;
> +}
> +
>  	/* If SWIOTLB is active, use its maximum mapping size */
>  	if (is_swiotlb_active(dev) &&
> -	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
> +	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE ||
> +	     is_dev_swiotlb_force(dev)))

This is a mess.  I think the right way is to have an always_bounce flag
in the io_tlb_mem structure instead.
Then the global swiotlb_force can go away and be replaced with this,
plus the fact that having no io_tlb_mem structure at all means bounce
buffering is forced off (the SWIOTLB_NO_FORCE case), after a little
refactoring.
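
Something along these lines, completely untested and just to sketch the
idea (the io_tlb_mem fields besides always_bounce and the helper name
are made up for illustration, not taken from an existing patch):

struct io_tlb_mem {
	phys_addr_t start;	/* start of the bounce buffer pool */
	unsigned long nslabs;	/* number of IO TLB slots */
	bool always_bounce;	/* bounce all DMA through this pool */
	/* ... */
};

static inline bool is_swiotlb_force_bounce(struct device *dev)
{
	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;

	/* no io_tlb_mem at all means no (forced) bouncing */
	return mem && mem->always_bounce;
}

and the max_mapping_size check above then simplifies to:

	/* If SWIOTLB is active, use its maximum mapping size */
	if (is_swiotlb_active(dev) &&
	    (dma_addressing_limited(dev) || is_swiotlb_force_bounce(dev)))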