From: Nélio Laranjeiro
Subject: Re: mlx5 flow create/destroy behaviour
Date: Thu, 30 Mar 2017 15:03:20 +0200
Message-ID: <20170330130320.GR16796@autoinstall.dev.6wind.com>
In-Reply-To: <70A7408C6E1BFB41B192A929744D8523968F9CBF@ALA-MBC.corp.ad.wrs.com>
To: "Legacy, Allain"
Cc: "Adrien Mazarguil (adrien.mazarguil@6wind.com)", "dev@dpdk.org", "Peters, Matt"

Hi Allain,

On Wed, Mar 29, 2017 at 12:29:59PM +0000, Legacy, Allain wrote:
> > -----Original Message-----
> > From: Nélio Laranjeiro [mailto:nelio.laranjeiro@6wind.com]
> > Sent: Wednesday, March 29, 2017 5:45 AM
> > <...>
> > > Almost... the only difference is that the ETH pattern also checks for
> > > type=0x8100
> >
> > Ethernet type was not supported in DPDK 17.02; support was submitted
> > later, in March [1]. Did you embed the patch in your test?
>
> No, but I am using the default eth mask (rte_flow_item_eth_mask), so it
> looks like it is accepting any ether type even though I set the VLAN
> type along with the src+dst.

Right,

> > > > Can you compile in debug mode (by setting
> > > > CONFIG_RTE_LIBRTE_MLX5_DEBUG to "y")? Then you should have as many
> > > > prints for the created rules as for the destroyed ones.
> > >
> > > I can give that a try.
>
> I ran with debug logs enabled and there are no logs coming from the
> PMD that indicate an error. All create and destroy calls report a
> successful result.
>
> I modified my test slightly yesterday to try to determine what is
> happening. What I found is that if I use a smaller number of flows the
> problem does not happen, but as soon as I use 256 flows or more the
> problem manifests itself. What I mean is:
>
> test 1:
>   1) start 16 flows (16 unique src MAC addresses sending to 16 unique
>      dst MAC addresses)
>   2) create flow rules
>   3) check that all subsequent packets are marked correctly
>   4) stop traffic
>   5) destroy all flow rules
>   6) wait 15 seconds
>   7) repeat from (1) for 4 iterations.
>
> test 2:
>   same as test 1 but with 32 flows
>
> test 3:
>   same as test 1 but with 64 flows
>
> test 4:
>   same as test 1 but with 128 flows
>
> test 5:
>   same as test 1 but with 256 flows (this is where the problem starts
>   happening)... it could very well be somewhere closer to 128, but I
>   am stepping up by powers of 2, so this is the first occurrence.
>
> I also modified my test to destroy the flow rules in the reverse order
> of creation, just in case ordering was an issue, but that had no
> effect.

I found an issue in the id retrieval while receiving a high rate of the
same flow [1]. You may be facing the same issue. Can you verify with
the patch?
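For reference, here is roughly how I picture one of your flow rules (a
minimal sketch against the 17.02 rte_flow API; the helper name, MAC
addresses and VLAN id are made up, not taken from your code). Note that
with rte_flow_item_eth_mask as the mask, the type field is zeroed,
i.e. not matched, which is why any ether type is accepted:

#include <rte_ether.h>
#include <rte_flow.h>
#include <rte_byteorder.h>

/* Hypothetical helper, not your actual code: build one ETH+VLAN
 * rule marking matched packets with mark_id. */
static struct rte_flow *
create_marked_flow(uint8_t port_id, const struct ether_addr *src,
		   const struct ether_addr *dst, uint32_t mark_id,
		   struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_eth eth_spec = {
		/* Wildcarded: rte_flow_item_eth_mask zeroes type. */
		.type = rte_cpu_to_be_16(0x8100),
	};
	struct rte_flow_item_vlan vlan_spec = {
		.tci = rte_cpu_to_be_16(100), /* placeholder VLAN id */
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH,
		  .spec = &eth_spec,
		  .mask = &rte_flow_item_eth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_VLAN,
		  .spec = &vlan_spec,
		  .mask = &rte_flow_item_vlan_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_mark mark = { .id = mark_id };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	eth_spec.src = *src;
	eth_spec.dst = *dst;
	return rte_flow_create(port_id, &attr, pattern, actions, err);
}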
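And this is the create/check/destroy cycle I understand your test to be
doing, again as a sketch with placeholder values (port_id, src_macs[]
and dst_macs[] are assumed to exist, and there is no error handling):

	/* One test iteration with 256 flows. */
	struct rte_flow *flows[256];
	struct rte_flow_error err;
	unsigned int i;

	for (i = 0; i != 256; ++i)
		flows[i] = create_marked_flow(port_id, &src_macs[i],
					      &dst_macs[i], i, &err);

	/* Run traffic; on RX, a marked packet has PKT_RX_FDIR_ID set
	 * in mbuf->ol_flags and the mark value in mbuf->hash.fdir.hi. */

	for (i = 0; i != 256; ++i)
		rte_flow_destroy(port_id, flows[i], &err);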
Thanks,

[1] http://dpdk.org/dev/patchwork/patch/22897/

--
Nélio Laranjeiro
6WIND