Date: Tue, 23 Apr 2019 20:24:56 +0200
From: Jesper Dangaard Brouer
To: Alexander Duyck
Cc: Saeed Mahameed, "davem@davemloft.net", "netdev@vger.kernel.org",
    "jakub.kicinski@netronome.com", Tariq Toukan, "bsd@fb.com",
    brouer@redhat.com
Subject: Re: [net-next 01/14] net/mlx5e: RX, Add a prefetch command for small L1_CACHE_BYTES
Message-ID: <20190423202456.6c506387@carbon>
In-Reply-To:
References: <20190422223306.31568-1-saeedm@mellanox.com>
 <20190422223306.31568-2-saeedm@mellanox.com>
 <20190422194647.372fd817@cakuba.netronome.com>
 <20190423152341.66a912b8@carbon>

On Tue, 23 Apr 2019 10:27:32 -0700 Alexander Duyck wrote:

> On Tue, Apr 23, 2019 at 9:42 AM Saeed Mahameed wrote:
> >
> > On Tue, 2019-04-23 at 08:21 -0700, Alexander Duyck wrote:
> > > On Tue, Apr 23, 2019 at 6:23 AM Jesper Dangaard Brouer wrote:
> > > > On Mon, 22 Apr 2019 19:46:47 -0700
> > > > Jakub Kicinski wrote:
> > > >
> > > > > On Mon, 22 Apr 2019 15:32:53 -0700, Saeed Mahameed wrote:
> > > > > > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> > > > > > index 51e109fdeec1..6147be23a9b9 100644
> > > > > > --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
> > > > > > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> > > > > > @@ -50,6 +50,7 @@
> > > > > >  #include
> > > > > >  #include
> > > > > >  #include
> > > > > > +#include
> > > > > >  #include "wq.h"
> > > > > >  #include "mlx5_core.h"
> > > > > >  #include "en_stats.h"
> > > > > > @@ -986,6 +987,22 @@ static inline void mlx5e_cq_arm(struct mlx5e_cq *cq)
> > > > > >  	mlx5_cq_arm(mcq, MLX5_CQ_DB_REQ_NOT, mcq->uar->map, cq->wq.cc);
> > > > > >  }
> > > > > >
> > > > > > +static inline void mlx5e_prefetch(void *p)
> > > > > > +{
> > > > > > +	prefetch(p);
> > > > > > +#if L1_CACHE_BYTES < 128
> > > > > > +	prefetch(p + L1_CACHE_BYTES);
> > > > > > +#endif
> > > > > > +}
> > > > > > +
> > > > > > +static inline void mlx5e_prefetchw(void *p)
> > > > > > +{
> > > > > > +	prefetchw(p);
> > > > > > +#if L1_CACHE_BYTES < 128
> > > > > > +	prefetchw(p + L1_CACHE_BYTES);
> > > > > > +#endif
> > > > > > +}
> > > > >
> > > > > All Intel drivers do the exact same thing, perhaps it's time to
> > > > > add a helper for this?
> > > > >
> > > > > net_prefetch_headers()
> > > > >
> > > > > or some such?
> > > >
> > > > I wonder if Tariq measured any effect from doing this?
> > > >
> > > > Because Intel CPUs will usually already prefetch the next cache-line,
> > > > as described in [1], you can even read (and modify) this MSR 0x1A4
> > > > e.g. via tools in [2]. Maybe Intel guys added it before this was done
> > > > in HW, and never cleaned it up?
> > > >
> > > > [1] https://software.intel.com/en-us/articles/disclosure-of-hw-prefetcher-control-on-some-intel-processors
> > >
> > > The issue is that the adjacent cache line prefetcher can be on or off,
> > > and a network driver shouldn't really be going through and twiddling
> > > those sort of bits. In some cases having it on can result in more
> > > memory being consumed than is needed. The reason why I enabled the
> > > additional cacheline prefetch for the Intel NICs is that most TCP
> > > packets are at a minimum 68 bytes for just the headers, so there was
> > > an advantage for TCP traffic in making certain we prefetched at least
> > > enough for us to process the headers.
> > >
> >
> > So if the L2 adjacent cache line prefetcher bit is enabled, then this

Nitpick: isn't it the DCU prefetcher bit that "Fetches the next cache
line into L1-D cache" in the link [1]?

> > additional prefetch step is redundant? What is the performance cost in
> > this case?
>
> I don't recall. I don't think it would be anything too significant though.

I tried to measure this (approx 1 year ago): a prefetch that is not
needed. AFAICR the overhead was below 1 nanosec, approx 0.333 ns
(but anyone claiming to be able to measure variations below 2 ns
accuracy should be questioned...).

> > > As far as Jakub's comment about combining the functions, I would be
> > > okay with that. We just need to make it a static inline function
> > > available to all the network drivers.
> > >
> >
> > Agreed, will drop this patch for now and Tariq will address it in the
> > next version.

I don't mind the patch, and Alex provided a good argument why it still
makes sense.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
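[Editorial appendix] A shared helper along the lines Jakub suggests could
look roughly like the sketch below. This is only an illustration modelled
on the mlx5e_prefetch()/mlx5e_prefetchw() functions in the patch above;
the names net_prefetch_headers()/net_prefetchw_headers() and their header
placement are hypothetical, not an existing kernel API in this thread.

/* Sketch of a generic "prefetch enough for packet headers" helper,
 * modelled on the mlx5e functions quoted above. Names are hypothetical.
 */
#include <linux/cache.h>	/* L1_CACHE_BYTES */
#include <linux/prefetch.h>	/* prefetch(), prefetchw() */

static inline void net_prefetch_headers(void *p)
{
	prefetch(p);
#if L1_CACHE_BYTES < 128
	/* Pull in a second cache line so a full set of packet headers
	 * (at least ~68 bytes for TCP, per Alex's point) is covered on
	 * CPUs with 64-byte cache lines.
	 */
	prefetch(p + L1_CACHE_BYTES);
#endif
}

static inline void net_prefetchw_headers(void *p)
{
	prefetchw(p);
#if L1_CACHE_BYTES < 128
	prefetchw(p + L1_CACHE_BYTES);
#endif
}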
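The hardware prefetcher state Jesper refers to can also be inspected from
user space by reading MSR 0x1A4 through /dev/cpu/N/msr (requires the msr
module loaded and root). A minimal sketch, assuming the bit layout
described in the Intel article [1] (bit 1 = L2 adjacent cache line
prefetcher disabled, bit 2 = DCU prefetcher disabled):

/* Read MSR 0x1A4 on CPU 0 and report the two prefetcher bits discussed
 * in this thread. Run "modprobe msr" first; needs root.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	uint64_t val;
	int fd = open("/dev/cpu/0/msr", O_RDONLY);

	/* The msr device returns the 8-byte MSR value at offset == MSR number. */
	if (fd < 0 || pread(fd, &val, sizeof(val), 0x1a4) != (ssize_t)sizeof(val)) {
		perror("read MSR 0x1A4");
		return 1;
	}
	printf("MSR 0x1A4 = 0x%llx\n", (unsigned long long)val);
	/* Per [1], a set bit means the corresponding prefetcher is disabled. */
	printf("L2 adjacent cache line prefetcher: %s\n", (val & 0x2) ? "off" : "on");
	printf("DCU (next L1-D line) prefetcher:   %s\n", (val & 0x4) ? "off" : "on");
	close(fd);
	return 0;
}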