From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nélio Laranjeiro
Subject: Re: [PATCH v2 3/5] net/mlx5: use buffer address for LKEY search
Date: Mon, 3 Jul 2017 16:06:05 +0200
Message-ID: <20170703140605.GE18305@autoinstall.dev.6wind.com>
References: <20170628230403.10142-1-yskoh@mellanox.com>
 <1342e608a5a7c45b7af17e9228d6ce643e7ae40e.1498850005.git.yskoh@mellanox.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Cc: ferruh.yigit@intel.com, dev@dpdk.org, adrien.mazarguil@6wind.com
To: Yongseok Koh
Content-Disposition: inline
In-Reply-To: <1342e608a5a7c45b7af17e9228d6ce643e7ae40e.1498850005.git.yskoh@mellanox.com>
List-Id: DPDK patches and discussions

On Fri, Jun 30, 2017 at 12:23:31PM -0700, Yongseok Koh wrote:
> When searching for an LKEY, if the search key is a mempool pointer, the 2nd
> cacheline has to be accessed and it even requires checking whether a buffer
> is indirect on every search. Instead, using the address as the search key
> can reduce the cycles taken. Caching the last hit entry is beneficial as
> well.
>
> Signed-off-by: Yongseok Koh
> ---
>  drivers/net/mlx5/mlx5_mr.c   | 17 ++++++++++++++---
>  drivers/net/mlx5/mlx5_rxtx.c | 39 +++++++++++++++++++++------------------
>  drivers/net/mlx5/mlx5_rxtx.h |  4 +++-
>  drivers/net/mlx5/mlx5_txq.c  |  3 +--
>  4 files changed, 39 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
> index 0a3638460..287335179 100644
> --- a/drivers/net/mlx5/mlx5_mr.c
> +++ b/drivers/net/mlx5/mlx5_mr.c
> @@ -265,18 +266,28 @@ txq_mp2mr_iter(struct rte_mempool *mp, void *arg)
>  	struct txq_mp2mr_mbuf_check_data data = {
>  		.ret = 0,
>  	};
> +	uintptr_t start;
> +	uintptr_t end;
>  	unsigned int i;
>
>  	/* Register mempool only if the first element looks like a mbuf. */
>  	if (rte_mempool_obj_iter(mp, txq_mp2mr_mbuf_check, &data) == 0 ||
>  	    data.ret == -1)
>  		return;
> +	if (mlx5_check_mempool(mp, &start, &end) != 0) {
> +		ERROR("mempool %p: not virtually contiguous",
> +		      (void *)mp);
> +		return;
> +	}
>  	for (i = 0; (i != RTE_DIM(txq_ctrl->txq.mp2mr)); ++i) {
> -		if (unlikely(txq_ctrl->txq.mp2mr[i].mp == NULL)) {
> +		struct ibv_mr *mr = txq_ctrl->txq.mp2mr[i].mr;
> +
> +		if (unlikely(mr == NULL)) {
>  			/* Unknown MP, add a new MR for it. */
>  			break;
>  		}
> -		if (txq_ctrl->txq.mp2mr[i].mp == mp)
> +		if (start >= (uintptr_t)mr->addr &&
> +		    end <= (uintptr_t)mr->addr + mr->length)
>  			return;
>  	}
>  	txq_mp2mr_reg(&txq_ctrl->txq, mp, i);

  if (start >= (uintptr_t)mr->addr &&
      end <= (uintptr_t)mr->addr + mr->length)

Is it expected to have a memory region bigger than the memory pool space?
I was expecting to see strict equality in the addresses.

Regards,

-- 
Nélio Laranjeiro
6WIND
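
To make the question above concrete, here is a minimal standalone sketch
(hypothetical names only, not the mlx5 driver code) contrasting the
range-containment check used in the patch with the strict-equality check
the question assumes. All addresses and sizes are made up for illustration.

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative only -- hypothetical names, not the mlx5 driver code.
 * "example_mr" stands in for a registered memory region; [start, end)
 * is the virtually contiguous range occupied by a mempool.
 */
struct example_mr {
	uintptr_t addr;  /* start address of the registered region */
	size_t length;   /* length of the registered region in bytes */
};

/* Containment check, as in the patch: the mempool lies inside the MR. */
static int
mr_covers(const struct example_mr *mr, uintptr_t start, uintptr_t end)
{
	return start >= mr->addr && end <= mr->addr + mr->length;
}

/* Strict-equality check, as the question assumes. */
static int
mr_equals(const struct example_mr *mr, uintptr_t start, uintptr_t end)
{
	return start == mr->addr && end == mr->addr + mr->length;
}

int
main(void)
{
	/* MR of 0x4000 bytes registered at 0x100000 (made-up numbers). */
	struct example_mr mr = { .addr = 0x100000, .length = 0x4000 };
	/* A mempool occupying only a 0x1000-byte sub-range of that MR. */
	uintptr_t start = 0x101000;
	uintptr_t end = 0x102000;

	/*
	 * Prints "covers=1 equals=0": the containment test reuses the
	 * existing MR, whereas strict equality would reject it and force
	 * a new registration for the same memory.
	 */
	printf("covers=%d equals=%d\n",
	       mr_covers(&mr, start, end),
	       mr_equals(&mr, start, end));
	return 0;
}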