Date: Wed, 7 Sep 2022 07:29:58 -0700
From: Christoph Hellwig
To: Jason Gunthorpe
Cc: Christoph Hellwig, Christian König, Alex Williamson, Cornelia Huck, dri-devel@lists.freedesktop.org, kvm@vger.kernel.org, linaro-mm-sig@lists.linaro.org, linux-media@vger.kernel.org, Sumit Semwal, Daniel Vetter, Leon Romanovsky, linux-rdma@vger.kernel.org, Maor Gottlieb, Oded Gabbay, Dan Williams
Subject: Re: [PATCH v2 4/4] vfio/pci: Allow MMIO regions to be exported through dma-buf
References: <0-v2-472615b3877e+28f7-vfio_dma_buf_jgg@nvidia.com> <4-v2-472615b3877e+28f7-vfio_dma_buf_jgg@nvidia.com>
X-Mailing-List: linux-rdma@vger.kernel.org

On Wed, Sep 07, 2022 at 09:33:11AM -0300, Jason Gunthorpe wrote:
> Yes, you said that, and I said that when the AMD driver first merged
> it - but it went in anyhow and now people are using it in a bunch of
> places.

The drm folks made up their own weird rules; if they internally stick
to them they'll have to live with them, given that they ignore review
comments, but this violates the scatterlist API and has no business
anywhere else in the kernel.  And yes, there probably is a reason or
two why the drm code is unusually error-prone.

> > Why would small BARs be problematic for the pages?  The pages are
> > more a problem for gigantic BARs due to the memory overhead.
>
> How do I get a struct page * for a 4k BAR in vfio?

I guess we have different definitions of small then :)

But unless my understanding of the code is out of date, memremap_pages
just requires the (virtual) start address to be 2MB aligned, not the
size.  Adding Dan for comments.

That being said, what is the point of mapping, say, a 4k BAR for p2p?
You're not going to save a measurable amount of CPU overhead if that
is the only place you transfer to.
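
For reference, the memremap_pages path being discussed looks roughly
like the sketch below.  This is a kernel-context fragment, not
standalone-compilable code: the helper name bar_to_pages is made up
for illustration, error handling is trimmed, and the field names
follow the dev_pagemap layout around the v6.0 era (p2pdma users
normally go through pci_p2pdma_add_resource, which wraps this).

```c
/* Hypothetical sketch: creating struct pages for one BAR of a PCI
 * device via memremap_pages().  The range is the BAR's physical
 * address range; the 2MB-alignment constraint mentioned above applies
 * to where the range gets mapped, not to the size of the BAR.
 */
#include <linux/memremap.h>
#include <linux/pci.h>

static void *bar_to_pages(struct pci_dev *pdev, int bar)
{
	struct dev_pagemap *pgmap;

	pgmap = devm_kzalloc(&pdev->dev, sizeof(*pgmap), GFP_KERNEL);
	if (!pgmap)
		return ERR_PTR(-ENOMEM);

	pgmap->type = MEMORY_DEVICE_PCI_P2PDMA;
	pgmap->range.start = pci_resource_start(pdev, bar);
	pgmap->range.end = pci_resource_end(pdev, bar);
	pgmap->nr_range = 1;

	/* Returns the kernel virtual address of the mapped range; the
	 * struct pages backing it can then be used for p2p DMA.
	 */
	return memremap_pages(pgmap, dev_to_node(&pdev->dev));
}
```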