Subject: Re: [bpf-next V3 PATCH 13/15] mlx5: use page_pool for xdp_return_frame call
From: Tariq Toukan
To: Jesper Dangaard Brouer, Tariq Toukan
Cc: netdev@vger.kernel.org, Björn Töpel, magnus.karlsson@intel.com,
 eugenia@mellanox.com, Jason Wang, John Fastabend, Eran Ben Elisha,
 Saeed Mahameed, galp@mellanox.com, Daniel Borkmann, Alexei Starovoitov
Date: Tue, 20 Mar 2018 09:43:56 +0200
Message-ID: <66f0da5a-388d-5ddc-4bb7-441f6df4af96@mellanox.com>
In-Reply-To: <20180319141217.416d269a@redhat.com>
References: <152062887576.27458.8590966896888512270.stgit@firesoul>
 <152062896782.27458.5542026179434739900.stgit@firesoul>
 <247192bd-761e-426c-c462-59efe3b7ca97@mellanox.com>
 <22866952-6bdf-2529-91a1-fb31bd2f2c2d@mellanox.com>
 <6e62fa6a-53c7-57a9-0493-3a48d832b479@mellanox.com>
 <20180319141217.416d269a@redhat.com>

On 19/03/2018 3:12 PM, Jesper Dangaard Brouer wrote:
> On Mon, 12 Mar 2018 15:20:06 +0200 Tariq Toukan wrote:
>
>> On 12/03/2018 12:16 PM, Tariq Toukan wrote:
>>>
>>> On 12/03/2018 12:08 PM, Tariq Toukan wrote:
>>>>
>>>> On 09/03/2018 10:56 PM, Jesper Dangaard Brouer wrote:
>>>>> This patch shows how it is possible to have both the driver local page
>>>>> cache, which uses an elevated refcnt for "catching"/avoiding SKB
>>>>> put_page, and at the same time have pages returned to the
>>>>> page_pool from ndo_xdp_xmit DMA completion.
>>>>>
> [...]
>>>>>
>>>>> Before this patch: single flow performance was 6Mpps, and if I started
>>>>> two flows the collective performance dropped to 4Mpps, because we hit
>>>>> the page allocator lock (further negative scaling occurs).
>>>>>
>>>>> V2: Adjustments requested by Tariq
>>>>>  - Changed page_pool_create to never return NULL, only
>>>>>    ERR_PTR, as this simplifies err handling in drivers.
>>>>>  - Save a branch in mlx5e_page_release
>>>>>  - Correct page_pool size calc for MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ
>>>>>
>>>>> Signed-off-by: Jesper Dangaard Brouer
>>>>> ---
>>>>
>>>> I am running perf tests with your series. I sense a drastic
>>>> degradation in regular TCP flows; I'm double-checking the numbers now...
>>>
>>> Well, there's a huge performance degradation indeed whenever the
>>> regular (non-XDP) flows use the new page pool. Cannot merge before
>>> fixing this.
>>>
>>> If I disable the local page-cache, numbers get as low as hundreds of
>>> Mbps in TCP stream tests.
>>
>> It seems that the page-pool doesn't fit as a general fallback (when the
>> page in the local rx cache is busy), as the refcnt is elevated/changing:
>
> I see the issue. I have to go over the details in the driver, but I
> think it should be sufficient to remove the WARN(). When the page_pool
> was integrated with the MM-layer, being invoked from the put_page()
> call itself, this would indicate a likely API misuse. But now, with
> the page-refcnt-based recycle tricks, it is the norm (for non-XDP) that
> put_page() is called without the knowledge of the page_pool.
>

I see, I'll remove the WARN and test.
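For the record, the ERR_PTR-only convention from the V2 adjustments is what
keeps the driver side simple here. A minimal sketch of the rq setup path as
I read it (field and label names approximated, not copied from the series):

	struct page_pool_params pp_params = { 0 };

	pp_params.order     = rq->buff.page_order;
	pp_params.pool_size = pool_size;	/* per-RQ-type sizing */
	pp_params.nid       = cpu_to_node(c->cpu);
	pp_params.dev       = c->pdev;
	pp_params.dma_dir   = rq->buff.map_dir;

	/* page_pool_create() never returns NULL, so a single
	 * IS_ERR() check covers all failure cases.
	 */
	rq->page_pool = page_pool_create(&pp_params);
	if (IS_ERR(rq->page_pool)) {
		err = PTR_ERR(rq->page_pool);
		rq->page_pool = NULL;
		goto err_rq_wq_destroy;
	}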
>
>> [ 7343.086102] ------------[ cut here ]------------
>> [ 7343.086103] __page_pool_put_page() violating page_pool invariance refcnt:0
>> [ 7343.086114] WARNING: CPU: 1 PID: 17 at net/core/page_pool.c:291 __page_pool_put_page+0x7c/0xa0
>
> Here page_pool actually catches the page refcnt race correctly, and does
> the proper handling of returning the page to the page allocator (via
> __put_page).
>
> I do notice (in the page_pool code) that in case page_pool handles the
> DMA mapping (which isn't the case yet), I'm missing a DMA unmap
> release in the code.
>

I didn't get this one. Both DMA map and unmap do not exist yet in the page
pool, no?
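To make sure we mean the same thing: a sketch of where such an unmap would
sit once the pool does own the mapping. The flag and the page->dma_addr
field are assumptions of mine, not something in this series:

	/* Hypothetical: pool-owned DMA mapping is not in this series.
	 * This is the slow path that hands a page back to the page
	 * allocator (the path that just caught my refcnt:0 page).
	 */
	static void page_pool_return_page(struct page_pool *pool,
					  struct page *page)
	{
		if (pool->p.flags & PP_FLAG_DMA_MAP)	/* assumed flag */
			dma_unmap_page(pool->p.dev, page->dma_addr,
				       PAGE_SIZE << pool->p.order,
				       pool->p.dma_dir);
		put_page(page);	/* back to the page allocator */
	}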