From: Ivan Khoronzhuk
Subject: Re: [PATCH net-next 3/3] net: ethernet: ti: cpsw: add XDP support
Date: Mon, 27 May 2019 21:10:46 +0300
Message-ID: <20190527181043.GA4246@khorivan>
In-Reply-To: <20190524135418.5408591e@carbon>
References: <20190523182035.9283-1-ivan.khoronzhuk@linaro.org>
 <20190523182035.9283-4-ivan.khoronzhuk@linaro.org>
 <20190524135418.5408591e@carbon>
To: Jesper Dangaard Brouer
Cc: grygorii.strashko@ti.com, davem@davemloft.net, ast@kernel.org,
 linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
 xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
 netdev@vger.kernel.org, daniel@iogearbox.net,
 jakub.kicinski@netronome.com, john.fastabend@gmail.com, Tariq Toukan

On Fri, May 24, 2019 at 01:54:18PM +0200, Jesper Dangaard Brouer wrote:
>On Thu, 23 May 2019 21:20:35 +0300
>Ivan Khoronzhuk wrote:
>
>> Add XDP support based on the rx page_pool allocator, one frame per
>> page. The page pool allocator is used with the assumption that only
>> one rx_handler is running simultaneously. DMA map/unmap is reused
>> from the page pool even though there is no need to map the whole
>> page.
>
>When using page_pool for DMA mapping, your XDP memory model must use
>one page per packet, which you state you do. This is because the
>__page_pool_put_page() fallback mode does a __page_pool_clean_page(),
>unmapping the DMA. Ilias and I are looking at options for removing
>this restriction, as mlx5 would need it (when we extend the SKB to
>return pages to page_pool).

Thanks for what you do; it can simplify a lot...

>
>Unfortunately, I've found another blocker for drivers using the DMA
>mapping feature of page_pool. We don't properly handle the case where
>a remote TX driver has xdp_frames in flight while the sending driver
>is unloaded and takes down the page_pool. Nothing crashes, but we end
>up calling put_page() on a page that is still DMA-mapped.

Seems so ... for a generic solution. But in the case of cpsw there
looks to be no issue, because the DMA address is derived "like direct"
by adding an offset to the page's single mapping, so whether it is the
page_pool DMA map or a DMA map/unmap per rx/xmit should make no big
difference. Not sure about all SoCs, though...

Despite that, for cpsw I currently keep the page_pool across down/up,
which I'm going to change in v2.

>
>I'm working on different solutions for fixing this, see here:
> https://github.com/xdp-project/xdp-project/blob/master/areas/mem/page_pool03_shutdown_inflight.org

I hope there will be no changes to the page_pool API.

>-- 
>Best regards,
>  Jesper Dangaard Brouer
>  MSc.CS, Principal Kernel Engineer at Red Hat
>  LinkedIn: http://www.linkedin.com/in/brouer

-- 
Regards,
Ivan Khoronzhuk
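
For readers following the thread, a minimal sketch of the rx setup the
patch describes: one page per frame, with page_pool doing the DMA
mapping. It assumes the page_pool and xdp APIs as they stood around
this thread (page_pool_create(), PP_FLAG_DMA_MAP, MEM_TYPE_PAGE_POOL);
the helper name cpsw_create_rx_pool() is illustrative, not the patch's
actual code:

    #include <net/page_pool.h>
    #include <net/xdp.h>

    /* Illustrative: one rx page_pool per channel. PP_FLAG_DMA_MAP makes
     * the pool map each page once on allocation and unmap it on release.
     */
    static struct page_pool *cpsw_create_rx_pool(struct cpsw_common *cpsw,
                                                 int pool_size)
    {
            struct page_pool_params pp_params = {
                    .order     = 0,                 /* one page per frame */
                    .flags     = PP_FLAG_DMA_MAP,
                    .pool_size = pool_size,
                    .nid       = NUMA_NO_NODE,
                    .dma_dir   = DMA_BIDIRECTIONAL, /* rx and XDP_TX */
                    .dev       = cpsw->dev,
            };

            return page_pool_create(&pp_params);
    }

The pool is then tied to the rx queue's memory model, so the XDP core
knows to return completed pages to it:

    err = xdp_rxq_info_reg(&priv->xdp_rxq[ch], priv->ndev, ch);
    if (!err)
            err = xdp_rxq_info_reg_mem_model(&priv->xdp_rxq[ch],
                                             MEM_TYPE_PAGE_POOL, pool);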
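
The "like direct" handling mentioned in the reply can be made concrete:
with one mapping per page held by the pool, the per-packet DMA address
is just that mapping plus a fixed headroom offset, so nothing is
remapped per rx/xmit. A hedged sketch; cpsw_common, rxv, CPSW_HEADROOM
and cpdma_chan_submit_mapped() follow the posted series, but details
may change in v2:

    /* Illustrative rx-refill helper: reuse the pool's page-wide DMA
     * mapping and point the DMA engine at the payload area.
     */
    static int cpsw_rx_refill_one(struct cpsw_common *cpsw,
                                  struct page_pool *pool,
                                  int ch, int pkt_size)
    {
            struct page *page;
            dma_addr_t dma;

            page = page_pool_dev_alloc_pages(pool);
            if (!page)
                    return -ENOMEM;

            /* No per-packet dma_map_single(): the address is derived by
             * offsetting into the page's existing mapping.
             */
            dma = page_pool_get_dma_addr(page) + CPSW_HEADROOM;
            return cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, page, dma,
                                            pkt_size, 0);
    }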
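
To make the in-flight blocker concrete, a paraphrase of the teardown
ordering under discussion (the core function names are from the
page_pool/xdp code of that period; the sequence and field names are
illustrative, not actual cpsw code):

    /* Illustrative teardown showing the hazard described above. */
    static void cpsw_xdp_teardown(struct cpsw_priv *priv, int ch)
    {
            /* Unregistering disconnects the rxq's memory model from the
             * XDP return path for future frames...
             */
            xdp_rxq_info_unreg(&priv->xdp_rxq[ch]);

            /* ...but frames already redirected to a remote TX driver may
             * still be in flight. Once the pool is gone, their eventual
             * xdp_return_frame() degrades to a bare put_page() on a page
             * whose DMA mapping was never released -- the blocker above.
             */
            page_pool_destroy(priv->page_pool[ch]);
    }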