From: Sagi Grimberg
Subject: Re: Question on mlx5 PMD txq memory registration
Date: Wed, 19 Jul 2017 09:21:39 +0300
Message-ID: <85c0b1d9-bbf3-c6ab-727f-f508c5e5f584@grimberg.me>
In-Reply-To: <20170717210222.j4dwxiujqdlqhlp2@shalom>
References: <75d08202-1882-7660-924c-b6dbb4455b88@grimberg.me> <20170717210222.j4dwxiujqdlqhlp2@shalom>
To: Nélio Laranjeiro
Cc: dev@dpdk.org, Shahaf Shuler, Yongseok Koh, Roy Shterman, Alexander Solganik, Leon Romanovsky

> There is none, if you send a burst of 9 packets each one coming from a
> different mempool the first one will be dropped.

It's worse than just a drop: without debug enabled the error completion is
ignored, and the wqe_pi is taken from an invalid field, which leads to
freeing bogus mbufs (elts_tail is not valid).

>> AFAICT, it is the driver responsibility to guarantee to never deregister
>> a memory region that has inflight send operations posted, otherwise
>> the send operation *will* complete with a local protection error. Is
>> that taken care of?
>
> Up to now we have assumed that the user knows its application needs and
> he can increase this cache size to its needs through the configuration
> item.
> This way this limit and guarantee becomes true.

That is an undocumented assumption.

>> Another question, why is the MR cache maintained per TX queue and not
>> per device? If the application starts N TX queues then a single mempool
>> will be registered N times instead of just once. Having lots of MR
>> instances will pollute the device ICMC pretty badly. Am I missing
>> something?
>
> Having this cache per device needs a lock on the device structure while
> threads are sending packets.

Not sure why it needs a lock at all; it *may* need RCU protection or an
rwlock, if anything (see the rough sketch further down).

> Having such locks cost cycles, that is why
> the cache is per queue. Another point is, having several mempool per
> device is something common, whereas having several mempool per queues is
> not, it seems logical to have this cache per queue for those two
> reasons.
>
> I am currently re-working this part of the code to improve it using
> reference counters instead. The cache will remain for performance
> purpose. This will fix the issues you are pointing.

AFAICT, all this caching mechanism is just working around the fact that
mlx5 allocates its resources on top of the existing verbs interface. I
think it should work like any other PMD driver, i.e. use the mbuf physical
addresses.

The mlx5 device (like all other RDMA devices) has a global DMA lkey that
spans the entire physical address space, and just about all the kernel
drivers use this lkey heavily. IMO, mlx5_pmd should be able to query the
kernel for this lkey and ask the kernel to create the QP with the
privilege level needed to post send/recv operations with that lkey.
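To illustrate the locking point above: a per-device cache can keep the hot
path read-only, taking the write side only when a previously unseen mempool
has to be registered. A rough sketch (not mlx5_pmd code; the names and the
fixed-size table are made up):

#include <stdint.h>
#include <rte_rwlock.h>
#include <rte_mempool.h>

/*
 * Rough sketch only. Per-device MR cache where concurrent TX queues take
 * the read side of the lock, so lookups do not serialize; the write lock
 * is needed only when a new mempool gets registered and inserted.
 */
struct mr_cache_entry {
	const struct rte_mempool *mp; /* mempool this MR covers */
	uint32_t lkey;                /* lkey of the registered MR */
};

struct mr_cache {
	rte_rwlock_t lock;
	unsigned int n;
	struct mr_cache_entry entry[32];
};

/* Hot path: read-side lookup. UINT32_MAX means "unknown mempool", i.e.
 * fall back to a slow path that registers under the write lock. */
static uint32_t
mr_cache_lookup(struct mr_cache *c, const struct rte_mempool *mp)
{
	uint32_t lkey = UINT32_MAX;
	unsigned int i;

	rte_rwlock_read_lock(&c->lock);
	for (i = 0; i != c->n; ++i) {
		if (c->entry[i].mp == mp) {
			lkey = c->entry[i].lkey;
			break;
		}
	}
	rte_rwlock_read_unlock(&c->lock);
	return lkey;
}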
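To make the lkey point concrete, here is roughly what the TX path could
look like once such a global lkey is exposed to a privileged QP. Purely
illustrative: there is no verb today to query that lkey from user space
(its value below is assumed to come from such a hypothetical query), and
the real datapath writes WQEs directly rather than calling ibv_post_send():

#include <infiniband/verbs.h>
#include <rte_mbuf.h>

/* Hypothetical: the reserved/global DMA lkey, assumed to be queried from
 * the kernel once at device initialization. */
static uint32_t global_lkey;

static int
tx_one(struct ibv_qp *qp, struct rte_mbuf *m)
{
	struct ibv_sge sge = {
		/* Bus address straight from the mbuf: no MR cache, no
		 * per-mempool registration, no lookup on the hot path. */
		.addr   = m->buf_physaddr + m->data_off,
		.length = rte_pktmbuf_data_len(m),
		.lkey   = global_lkey,
	};
	struct ibv_send_wr wr = {
		.sg_list    = &sge,
		.num_sge    = 1,
		.opcode     = IBV_WR_SEND,
		.send_flags = IBV_SEND_SIGNALED,
	};
	struct ibv_send_wr *bad;

	return ibv_post_send(qp, &wr, &bad);
}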
And then mlx5_pmd becomes like the other drivers, working with physical
addresses instead of working around the memory registration sub-optimally.

And while we're on the subject, what is the plan for detaching mlx5_pmd
from its MLNX_OFED dependency? Mellanox has been doing a good job
upstreaming the needed features (rdma-core). CC'ing Leon (who is
co-maintaining the user-space rdma tree).