Date: Fri, 15 Feb 2019 10:35:39 -0700
From: Jason Gunthorpe
To: Shiraz Saleem
Cc: dledford@redhat.com, davem@davemloft.net, linux-rdma@vger.kernel.org,
        netdev@vger.kernel.org, mustafa.ismail@intel.com,
        jeffrey.t.kirsher@intel.com
Subject: Re: [RFC v1 12/19] RDMA/irdma: Implement device supported verb APIs
Message-ID: <20190215173539.GD30706@ziepe.ca>
References: <20190215171107.6464-1-shiraz.saleem@intel.com>
 <20190215171107.6464-13-shiraz.saleem@intel.com>
In-Reply-To: <20190215171107.6464-13-shiraz.saleem@intel.com>
User-Agent: Mutt/1.9.4 (2018-02-28)
List-ID: netdev@vger.kernel.org

On Fri, Feb
15, 2019 at 11:10:59AM -0600, Shiraz Saleem wrote:
> +static int irdma_alloc_pd(struct ib_pd *pd,
> +                          struct ib_ucontext *context,
> +                          struct ib_udata *udata)
> +{
> +        struct irdma_pd *iwpd = to_iwpd(pd);
> +        struct irdma_device *iwdev = to_iwdev(pd->device);
> +        struct irdma_sc_dev *dev = &iwdev->rf->sc_dev;
> +        struct irdma_pci_f *rf = iwdev->rf;
> +        struct irdma_alloc_pd_resp uresp = {};
> +        struct irdma_sc_pd *sc_pd;
> +        struct irdma_ucontext *ucontext;
> +        u32 pd_id = 0;
> +        int err;
> +
> +        if (iwdev->closing)
> +                return -ENODEV;

No crazy unlocked 'closing' flags. The core code takes care of
everything a driver needs to worry about if you use it properly.

> +/**
> + * irdma_create_cq - create cq
> + * @ibdev: device pointer from stack
> + * @attr: attributes for cq
> + * @context: user context created during alloc
> + * @udata: user data
> + */
> +static struct ib_cq *irdma_create_cq(struct ib_device *ibdev,
> +                                     const struct ib_cq_init_attr *attr,
> +                                     struct ib_ucontext *context,
> +                                     struct ib_udata *udata)
> +{
> +        struct irdma_device *iwdev = to_iwdev(ibdev);
> +        struct irdma_pci_f *rf = iwdev->rf;
> +        struct irdma_cq *iwcq;
> +        struct irdma_pbl *iwpbl;
> +        u32 cq_num = 0;
> +        struct irdma_sc_cq *cq;
> +        struct irdma_sc_dev *dev = &rf->sc_dev;
> +        struct irdma_cq_init_info info = {};
> +        enum irdma_status_code status;
> +        struct irdma_cqp_request *cqp_request;
> +        struct cqp_cmds_info *cqp_info;
> +        struct irdma_cq_uk_init_info *ukinfo = &info.cq_uk_init_info;
> +        unsigned long flags;
> +        int err_code;
> +        int entries = attr->cqe;
> +
> +        if (iwdev->closing)
> +                return ERR_PTR(-ENODEV);
> +
> +        if (entries > rf->max_cqe)
> +                return ERR_PTR(-EINVAL);
> +
> +        iwcq = kzalloc(sizeof(*iwcq), GFP_KERNEL);
> +        if (!iwcq)
> +                return ERR_PTR(-ENOMEM);
> +
> +        err_code = irdma_alloc_rsrc(rf, rf->allocated_cqs,
> +                                    rf->max_cq, &cq_num,
> +                                    &rf->next_cq);
> +        if (err_code)
> +                goto error;
> +
> +        cq = &iwcq->sc_cq;
> +        cq->back_cq = (void *)iwcq;
> +
> +        spin_lock_init(&iwcq->lock);
> +        info.dev = dev;
> +        ukinfo->cq_size = max(entries, 4);
> +        ukinfo->cq_id = cq_num;
> +        iwcq->ibcq.cqe = info.cq_uk_init_info.cq_size;
> +        if (attr->comp_vector < rf->ceqs_count)
> +                info.ceq_id = attr->comp_vector;
> +        info.ceq_id_valid = true;
> +        info.ceqe_mask = 1;
> +        info.type = IRDMA_CQ_TYPE_IWARP;
> +        info.vsi = &iwdev->vsi;
> +
> +        if (context) {

Drivers should rarely write 'if context'. The test for userspaceness is
'if (udata)' - and in this case context is guaranteed. Lots of places
with this wrong.

Also this will need to be rebased as this all changed.

> +        return (struct ib_cq *)iwcq;

And don't write casts like that; use &iwcq->ibcq or something. Find and
fix them all please.

> +/**
> + * irdma_set_page - populate pbl list for fmr
> + * @ibmr: ib mem to access iwarp mr pointer
> + * @addr: page dma address fro pbl list
> + */
> +static int irdma_set_page(struct ib_mr *ibmr,
> +                          u64 addr)

Can you please read through this giant driver and hit various places
with wonky formatting with clang-format? We don't need to start out a
new driver with bonkers indentation.
> +
> +static const struct ib_device_ops irdma_roce_dev_ops = {
> +        .get_link_layer = irdma_get_link_layer,
> +        .query_ah = irdma_query_ah,
> +        .attach_mcast = irdma_attach_mcast,
> +        .detach_mcast = irdma_detach_mcast,
> +        .query_gid = irdma_query_gid_roce,
> +        .modify_qp = irdma_modify_qp_roce,
> +};
> +
> +static const struct ib_device_ops irdma_iw_dev_ops = {
> +        .query_gid = irdma_query_gid,
> +        .modify_qp = irdma_modify_qp,
> +};
> +
> +static const struct ib_device_ops irdma_dev_ops = {
> +        .get_port_immutable = irdma_port_immutable,
> +        .get_netdev = irdma_get_netdev,
> +        .query_port = irdma_query_port,
> +        .modify_port = irdma_modify_port,
> +        .query_pkey = irdma_query_pkey,
> +        .alloc_ucontext = irdma_alloc_ucontext,
> +        .dealloc_ucontext = irdma_dealloc_ucontext,
> +        .mmap = irdma_mmap,
> +        .alloc_pd = irdma_alloc_pd,
> +        .dealloc_pd = irdma_dealloc_pd,
> +        .create_qp = irdma_create_qp,
> +        .query_qp = irdma_query_qp,
> +        .destroy_qp = irdma_destroy_qp,
> +        .create_cq = irdma_create_cq,
> +        .destroy_cq = irdma_destroy_cq,
> +        .get_dma_mr = irdma_get_dma_mr,
> +        .reg_user_mr = irdma_reg_user_mr,
> +        .dereg_mr = irdma_dereg_mr,
> +        .alloc_mw = irdma_alloc_mw,
> +        .dealloc_mw = irdma_dealloc_mw,
> +        .alloc_hw_stats = irdma_alloc_hw_stats,
> +        .get_hw_stats = irdma_get_hw_stats,
> +        .query_device = irdma_query_device,
> +        .create_ah = irdma_create_ah,
> +        .destroy_ah = irdma_destroy_ah,
> +        .drain_sq = irdma_drain_sq,
> +        .drain_rq = irdma_drain_rq,
> +        .alloc_mr = irdma_alloc_mr,
> +        .map_mr_sg = irdma_map_mr_sg,
> +        .get_dev_fw_str = irdma_get_dev_fw_str,
> +        .poll_cq = irdma_poll_cq,
> +        .req_notify_cq = irdma_req_notify_cq,
> +        .post_send = irdma_post_send,
> +        .post_recv = irdma_post_recv,
> +        .disassociate_ucontext = irdma_disassociate_ucontext,
> +        INIT_RDMA_OBJ_SIZE(ib_pd, irdma_pd, ibpd),
> +};

All lists of things should be sorted. I saw many examples of unsorted
lists.
> +/**
> + * irdma_init_roce_device - initialization of iwarp rdma device
> + * @iwibdev: irdma ib device
> + */
> +static int irdma_init_iw_device(struct irdma_ib_device *iwibdev)
> +{
> +        struct net_device *netdev = iwibdev->iwdev->netdev;
> +
> +        iwibdev->ibdev.node_type = RDMA_NODE_RNIC;
> +        ether_addr_copy((u8 *)&iwibdev->ibdev.node_guid, netdev->dev_addr);
> +        iwibdev->ibdev.iwcm = kzalloc(sizeof(*iwibdev->ibdev.iwcm), GFP_KERNEL);
> +        if (!iwibdev->ibdev.iwcm)
> +                return -ENOMEM;
> +
> +        iwibdev->ibdev.iwcm->add_ref = irdma_add_ref;
> +        iwibdev->ibdev.iwcm->rem_ref = irdma_rem_ref;
> +        iwibdev->ibdev.iwcm->get_qp = irdma_get_qp;
> +        iwibdev->ibdev.iwcm->connect = irdma_connect;
> +        iwibdev->ibdev.iwcm->accept = irdma_accept;
> +        iwibdev->ibdev.iwcm->reject = irdma_reject;
> +        iwibdev->ibdev.iwcm->create_listen = irdma_create_listen;
> +        iwibdev->ibdev.iwcm->destroy_listen = irdma_destroy_listen;

Huh. These should probably be moved into the ops structure too.

> +/**
> + * irdma_register_rdma_device - register iwarp device to IB
> + * @iwdev: iwarp device
> + */
> +int irdma_register_rdma_device(struct irdma_device *iwdev)
> +{
> +        int ret;
> +        struct irdma_ib_device *iwibdev;
> +
> +        ret = irdma_init_rdma_device(iwdev);
> +        if (ret)
> +                return ret;
> +
> +        iwibdev = iwdev->iwibdev;
> +        rdma_set_device_sysfs_group(&iwibdev->ibdev, &irdma_attr_group);
> +        if (iwdev->rf->sc_dev.hw_attrs.hw_rev == IRDMA_GEN_1)
> +                /* backward compat with old user-space libi40iw */
> +                iwibdev->ibdev.driver_id = RDMA_DRIVER_I40IW;

Really? Then what is the problem in rdma-core? Why are we getting a
replacement driver instead of fixing the old one?

This is very long, I didn't read it super closely :(

Jason