From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Hanoch Haim (hhaim)"
Subject: Re: mlx5 reta size is dynamic
Date: Thu, 22 Mar 2018 09:02:19 +0000
Message-ID: <92a7d23b9df748b6af83f7dda88672e4@XCH-RTP-017.cisco.com>
To: Nélio Laranjeiro
Cc: Yongseok Koh, "dev@dpdk.org"
In-Reply-To: <20180322085441.a3o2eyvols7jkzxo@laranjeiro-vm.dev.6wind.com>
List-Id: DPDK patches and discussions

Hi Nelio,

I think you didn't understand me. I suggest keeping the RETA table size
constant (the maximum, 512, in your case) instead of changing it based on
the number of configured Rx queues.

This would keep the DPDK API consistent. As a user I currently need tricks
(allocating an odd/prime number of Rx queues) to keep the RETA size
constant at 512.

I'm not talking about changing the values in the RETA table, which can be
done while there is traffic.

Thanks,
Hanoh

-----Original Message-----
From: Nélio Laranjeiro [mailto:nelio.laranjeiro@6wind.com]
Sent: Thursday, March 22, 2018 10:55 AM
To: Hanoch Haim (hhaim)
Cc: Yongseok Koh; dev@dpdk.org
Subject: Re: [dpdk-dev] mlx5 reta size is dynamic

On Thu, Mar 22, 2018 at 06:52:53AM +0000, Hanoch Haim (hhaim) wrote:
> Hi Yongseok,
>
> RSS has a DPDK API: the application can ask for the RETA table size and
> configure it.
> In your case you assume a specific use case and change the size
> dynamically, which solves 90% of the use cases but breaks the other 10%.
> Instead, you could provide the application a consistent API, with which
> 100% of applications can work with no issue. This is what happens with
> Intel (ixgbe/i40e). Another minor issue: rss_key_size is returned as
> zero, but internally it is 40 bytes.

Hi Hanoch,

The legacy DPDK API has always assumed there is only a single indirection
table (aka RETA), whereas this is not true on this device [1][2].

On MLX5 there is an indirection table per hash Rx queue, built from the
list of queues making part of it. The hash Rx queue is configured to
compute the hash with the configured information:

- algorithm,
- key,
- hash fields (Verbs hash fields),
- indirection table.

A hash Rx queue cannot handle multiple RSS configurations; we have one
hash Rx queue per protocol and thus a full configuration per protocol.

In such a situation, changing the RETA means stopping the traffic and
destroying every single flow, hash Rx queue, and indirection table, in
order to remake everything with the new configuration.

Until now, we have always recommended that applications restart the port
on this device after a RETA update to apply the new configuration.

Since the flow API is the new way to configure flows, applications should
move to it instead of using the old API for such behavior. We should also
remove this devop from the PMD to avoid any confusion.

Regards,

> Thanks,
> Hanoh
>
> -----Original Message-----
> From: Yongseok Koh [mailto:yskoh@mellanox.com]
> Sent: Wednesday, March 21, 2018 11:48 PM
> To: Hanoch Haim (hhaim)
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] mlx5 reta size is dynamic
>
> On Wed, Mar 21, 2018 at 06:56:33PM +0000, Hanoch Haim (hhaim) wrote:
> > Hi mlx5 driver expert,
> >
> > DPDK: 17.11
> > Any reason the mlx5 driver changes the RETA table size dynamically
> > based on the number of Rx queues?
>
> The device only supports power-of-two-sized indirection tables. For
> example, if the number of Rx queues is 6, the device can't have a 1-1
> mapping, but the size of the indirection table could be 8, 16, 32 and
> so on. If we configure it as 8, for example, 2 out of the 6 queues will
> get 1/4 of the traffic each while the remaining 4 queues receive 1/8
> each. We thought that was too much disparity and preferred setting the
> maximum size in order to mitigate the imbalance.
>
> > There is a hidden assumption that the user wants to distribute the
> > packets evenly, which is not always correct.
>
> But it is mostly correct, because RSS is used for uniform distribution.
> The decision wasn't made based on our speculation but on many requests
> from multiple customers.
>
> > /* If the requested number of RX queues is not a power of two, use the
> >  * maximum indirection table size for better balancing.
> >  * The result is always rounded to the next power of two. */
> > reta_idx_n = (1 << log2above((rxqs_n & (rxqs_n - 1)) ?
> >                              priv->ind_table_max_size :
> >                              rxqs_n));
>
> Thanks,
> Yongseok

[1] https://dpdk.org/ml/archives/dev/2015-October/024668.html
[2] https://dpdk.org/ml/archives/dev/2015-October/024669.html

--
Nélio Laranjeiro
6WIND