From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, "David S. Miller", Eli Cohen, Haggai Abramonvsky, Jack Morgenstein, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, majd@mellanox.com, Matan Barak, Or Gerlitz, Roland Dreier, Sagi Grimberg, Yishai Hadas
Subject: [PATCH rdma-next v1 0/7] Use ib_umem_find_best_pgsz() for all umems
Date: Sun, 15 Nov 2020 13:43:04 +0200
Message-Id: <20201115114311.136250-1-leon@kernel.org>

From: Leon Romanovsky

Changelog:
v1:
 * Added patch for raw QP
 * Fixed commit messages
v0: https://lore.kernel.org/lkml/20201026132635.1337663-1-leon@kernel.org

-------------------------

From Jason:

Move the remaining cases working with umems to use versions of
ib_umem_find_best_pgsz() tailored to the calculations the device
requires. Unlike an MR, there is no IOVA; instead a page offset from
the starting page is possible, with various restrictions. Compute the
best page size to meet the page_offset restrictions.

Thanks

Jason Gunthorpe (7):
  RDMA/mlx5: Use ib_umem_find_best_pgoff() for SRQ
  RDMA/mlx5: Use mlx5_umem_find_best_quantized_pgoff() for WQ
  RDMA/mlx5: Directly compute the PAS list for raw QP RQ's
  RDMA/mlx5: Use mlx5_umem_find_best_quantized_pgoff() for QP
  RDMA/mlx5: mlx5_umem_find_best_quantized_pgoff() for CQ
  RDMA/mlx5: Use ib_umem_find_best_pgsz() for devx
  RDMA/mlx5: Lower setting the umem's PAS for SRQ

 drivers/infiniband/hw/mlx5/cq.c      |  48 +++++---
 drivers/infiniband/hw/mlx5/devx.c    |  66 ++++++-----
 drivers/infiniband/hw/mlx5/mem.c     | 115 +++++++------------
 drivers/infiniband/hw/mlx5/mlx5_ib.h |  47 +++++++-
 drivers/infiniband/hw/mlx5/qp.c      | 165 ++++++++++++---------------
 drivers/infiniband/hw/mlx5/srq.c     |  27 +----
 drivers/infiniband/hw/mlx5/srq.h     |   1 +
 drivers/infiniband/hw/mlx5/srq_cmd.c |  80 ++++++++++++-
 include/rdma/ib_umem.h               |  42 +++++++
 9 files changed, 343 insertions(+), 248 deletions(-)

--
2.28.0