From: Leon Romanovsky <leon@kernel.org>
To: Shiraz Saleem <shiraz.saleem@intel.com>
Cc: dledford@redhat.com, jgg@ziepe.ca, davem@davemloft.net,
linux-rdma@vger.kernel.org, netdev@vger.kernel.org,
mustafa.ismail@intel.com, jeffrey.t.kirsher@intel.com
Subject: Re: [RFC v1 11/19] RDMA/irdma: Add PBLE resource manager
Date: Wed, 27 Feb 2019 08:58:04 +0200 [thread overview]
Message-ID: <20190227065804.GD11231@mtr-leonro.mtl.com> (raw)
In-Reply-To: <20190215171107.6464-12-shiraz.saleem@intel.com>
On Fri, Feb 15, 2019 at 11:10:58AM -0600, Shiraz Saleem wrote:
> From: Mustafa Ismail <mustafa.ismail@intel.com>
>
> Implement a Physical Buffer List Entry (PBLE) resource manager
> to manage a pool of PBLE HMC resource objects.
>
> Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com>
> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
> ---
> drivers/infiniband/hw/irdma/pble.c | 520 +++++++++++++++++++++++++++++++++++++
> drivers/infiniband/hw/irdma/pble.h | 135 ++++++++++
> 2 files changed, 655 insertions(+)
> create mode 100644 drivers/infiniband/hw/irdma/pble.c
> create mode 100644 drivers/infiniband/hw/irdma/pble.h
>
> diff --git a/drivers/infiniband/hw/irdma/pble.c b/drivers/infiniband/hw/irdma/pble.c
> new file mode 100644
> index 0000000..66fab69
> --- /dev/null
> +++ b/drivers/infiniband/hw/irdma/pble.c
> @@ -0,0 +1,520 @@
> +// SPDX-License-Identifier: GPL-2.0 or Linux-OpenIB
> +/* Copyright (c) 2019, Intel Corporation. */
> +
> +#include "osdep.h"
> +#include "status.h"
> +#include "hmc.h"
> +#include "defs.h"
> +#include "type.h"
> +#include "protos.h"
> +#include "pble.h"
> +
> +static enum irdma_status_code add_pble_prm(struct irdma_hmc_pble_rsrc *pble_rsrc);
> +
> +/**
> + * irdma_destroy_pble_prm - destroy prm during module unload
> + * @pble_rsrc: pble resources
> + */
> +void irdma_destroy_pble_prm(struct irdma_hmc_pble_rsrc *pble_rsrc)
> +{
> + struct irdma_sc_dev *dev = pble_rsrc->dev;
> + struct irdma_chunk *chunk;
> + struct irdma_pble_prm *pinfo = &pble_rsrc->pinfo;
> +
> + while (!list_empty(&pinfo->clist)) {
> + chunk = (struct irdma_chunk *)pinfo->clist.next;
> + list_del(&chunk->list);
> + if (chunk->type == PBLE_SD_PAGED)
> + irdma_pble_free_paged_mem(chunk);
> + if (chunk->bitmapbuf)
> + irdma_free_virt_mem(dev->hw, &chunk->bitmapmem);
> + irdma_free_virt_mem(dev->hw, &chunk->chunkmem);
> + }
> +}
> +
> +/**
> + * irdma_hmc_init_pble - Initialize pble resources during module load
> + * @dev: irdma_sc_dev struct
> + * @pble_rsrc: pble resources
> + */
> +enum irdma_status_code
> +irdma_hmc_init_pble(struct irdma_sc_dev *dev,
> + struct irdma_hmc_pble_rsrc *pble_rsrc)
> +{
> + struct irdma_hmc_info *hmc_info;
> + u32 fpm_idx = 0;
> + enum irdma_status_code status = 0;
> +
> + hmc_info = dev->hmc_info;
> + pble_rsrc->dev = dev;
> + pble_rsrc->fpm_base_addr = hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].base;
> + /* Start pbles on a 4k boundary */
> + if (pble_rsrc->fpm_base_addr & 0xfff)
> + fpm_idx = (PAGE_SIZE - (pble_rsrc->fpm_base_addr & 0xfff)) >> 3;
> + pble_rsrc->unallocated_pble =
> + hmc_info->hmc_obj[IRDMA_HMC_IW_PBLE].cnt - fpm_idx;
> + pble_rsrc->next_fpm_addr = pble_rsrc->fpm_base_addr + (fpm_idx << 3);
> + pble_rsrc->pinfo.pble_shift = PBLE_SHIFT;
> +
> + spin_lock_init(&pble_rsrc->pinfo.prm_lock);
> + INIT_LIST_HEAD(&pble_rsrc->pinfo.clist);
> + if (add_pble_prm(pble_rsrc)) {
> + irdma_destroy_pble_prm(pble_rsrc);
> + status = IRDMA_ERR_NO_MEMORY;
> + }
> +
> + return status;
> +}
> +
> +/**
> + * get_sd_pd_idx - Returns sd index, pd index and rel_pd_idx from fpm address
> + * @pble_rsrc: structure containing fpm address
> + * @idx: where to return indexes
> + */
> +static void get_sd_pd_idx(struct irdma_hmc_pble_rsrc *pble_rsrc,
> + struct sd_pd_idx *idx)
> +{
> + idx->sd_idx = (u32)(pble_rsrc->next_fpm_addr) /
> + IRDMA_HMC_DIRECT_BP_SIZE;
> + idx->pd_idx = (u32)(pble_rsrc->next_fpm_addr) / IRDMA_HMC_PAGED_BP_SIZE;
> + idx->rel_pd_idx = (idx->pd_idx % IRDMA_HMC_PD_CNT_IN_SD);
The amount of type-casting in this driver is astonishing. It would be
better to declare the types consistently from the beginning so these
casts aren't needed.
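As one illustration of what that could look like (just a sketch; it
assumes the sd_pd_idx fields are u32 and that reducing next_fpm_addr
to 32 bits once up front is acceptable):

static void get_sd_pd_idx(struct irdma_hmc_pble_rsrc *pble_rsrc,
			  struct sd_pd_idx *idx)
{
	/* reduce the fpm address to 32 bits once instead of casting per use */
	u32 fpm_addr = lower_32_bits(pble_rsrc->next_fpm_addr);

	idx->sd_idx = fpm_addr / IRDMA_HMC_DIRECT_BP_SIZE;
	idx->pd_idx = fpm_addr / IRDMA_HMC_PAGED_BP_SIZE;
	idx->rel_pd_idx = idx->pd_idx % IRDMA_HMC_PD_CNT_IN_SD;
}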
> +}
> +
> +/**
> + * add_sd_direct - add sd direct for pble
> + * @pble_rsrc: pble resource ptr
> + * @info: page info for sd
> + */
> +static enum irdma_status_code
> +add_sd_direct(struct irdma_hmc_pble_rsrc *pble_rsrc,
> + struct irdma_add_page_info *info)
> +{
> + struct irdma_sc_dev *dev = pble_rsrc->dev;
> + enum irdma_status_code ret_code = 0;
> + struct sd_pd_idx *idx = &info->idx;
> + struct irdma_chunk *chunk = info->chunk;
> + struct irdma_hmc_info *hmc_info = info->hmc_info;
> + struct irdma_hmc_sd_entry *sd_entry = info->sd_entry;
> + u32 offset = 0;
> +
> + if (!sd_entry->valid) {
> + ret_code = irdma_add_sd_table_entry(dev->hw, hmc_info,
> + info->idx.sd_idx,
> + IRDMA_SD_TYPE_DIRECT,
> + IRDMA_HMC_DIRECT_BP_SIZE);
> + if (ret_code)
> + return ret_code;
> +
> + chunk->type = PBLE_SD_CONTIGOUS;
> + }
> +
> + offset = idx->rel_pd_idx << HMC_PAGED_BP_SHIFT;
> + chunk->size = info->pages << HMC_PAGED_BP_SHIFT;
> + chunk->vaddr = (u64)((u8 *)sd_entry->u.bp.addr.va + offset);
> + chunk->fpm_addr = pble_rsrc->next_fpm_addr;
> + irdma_debug(dev, IRDMA_DEBUG_PBLE,
> + "chunk_size[%lld] = 0x%llx vaddr=0x%llx fpm_addr = %llx\n",
> + chunk->size, chunk->size, chunk->vaddr, chunk->fpm_addr);
> +
> + return 0;
> +}
> +
> +/**
> + * fpm_to_idx - given fpm address, get pble index
> + * @pble_rsrc: pble resource management
> + * @addr: fpm address for index
> + */
> +static u32 fpm_to_idx(struct irdma_hmc_pble_rsrc *pble_rsrc, u64 addr)
> +{
> + u64 idx;
> +
> + idx = (addr - (pble_rsrc->fpm_base_addr)) >> 3;
> +
> + return (u32)idx;
lower_32_bits() would be cleaner here than the open-coded cast.
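i.e. something along these lines (a sketch, assuming the u64
intermediate isn't needed elsewhere):

static u32 fpm_to_idx(struct irdma_hmc_pble_rsrc *pble_rsrc, u64 addr)
{
	/* PBLEs are 8 bytes, so the index is the offset from the fpm base >> 3 */
	return lower_32_bits((addr - pble_rsrc->fpm_base_addr) >> 3);
}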
Thanks