From: Simon Horman <horms@kernel.org>
To: Shinas Rasheed <srasheed@marvell.com>
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
hgani@marvell.com, vimleshk@marvell.com, sedara@marvell.com,
egallen@redhat.com, mschmidt@redhat.com, pabeni@redhat.com,
kuba@kernel.org, wizhao@redhat.com, kheib@redhat.com,
konguyen@redhat.com, Veerasenareddy Burru <vburru@marvell.com>,
Satananda Burla <sburla@marvell.com>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>
Subject: Re: [PATCH net-next v1 4/8] octeon_ep_vf: add Tx/Rx ring resource setup and cleanup
Date: Fri, 22 Dec 2023 14:10:12 +0100
Message-ID: <20231222131012.GG1202958@kernel.org>
In-Reply-To: <20231221092844.2885872-5-srasheed@marvell.com>
On Thu, Dec 21, 2023 at 01:28:40AM -0800, Shinas Rasheed wrote:
> Implement Tx/Rx ring resource allocation and cleanup.
>
> Signed-off-by: Shinas Rasheed <srasheed@marvell.com>
Hi Shinas,
some minor feedback from my side which you might consider addressing
if you have to respin the series for some other reason.
...
> +/**
> + * octep_vf_setup_oq() - Setup a Rx queue.
> + *
> + * @oct: Octeon device private data structure.
> + * @q_no: Rx queue number to be setup.
> + *
> + * Allocate resources for a Rx queue.
> + */
> +static int octep_vf_setup_oq(struct octep_vf_device *oct, int q_no)
> +{
> + struct octep_vf_oq *oq;
> + u32 desc_ring_size;
> +
> + oq = vzalloc(sizeof(*oq));
> + if (!oq)
> + goto create_oq_fail;
> + oct->oq[q_no] = oq;
> +
> + oq->octep_vf_dev = oct;
> + oq->netdev = oct->netdev;
> + oq->dev = &oct->pdev->dev;
> + oq->q_no = q_no;
> + oq->max_count = CFG_GET_OQ_NUM_DESC(oct->conf);
> + oq->ring_size_mask = oq->max_count - 1;
> + oq->buffer_size = CFG_GET_OQ_BUF_SIZE(oct->conf);
> + oq->max_single_buffer_size = oq->buffer_size - OCTEP_VF_OQ_RESP_HW_SIZE;
> +
> + /* When the hardware/firmware supports additional capabilities,
> + * additional header is filled-in by Octeon after length field in
> + * Rx packets. this header contains additional packet information.
> + */
> + if (oct->fw_info.rx_ol_flags)
> + oq->max_single_buffer_size -= OCTEP_VF_OQ_RESP_HW_EXT_SIZE;
> +
> + oq->refill_threshold = CFG_GET_OQ_REFILL_THRESHOLD(oct->conf);
> +
> + desc_ring_size = oq->max_count * OCTEP_VF_OQ_DESC_SIZE;
> + oq->desc_ring = dma_alloc_coherent(oq->dev, desc_ring_size,
> + &oq->desc_ring_dma, GFP_KERNEL);
> +
> + if (unlikely(!oq->desc_ring)) {
> + dev_err(oq->dev,
> + "Failed to allocate DMA memory for OQ-%d !!\n", q_no);
> + goto desc_dma_alloc_err;
> + }
> +
> + oq->buff_info = (struct octep_vf_rx_buffer *)
> + vzalloc(oq->max_count * OCTEP_VF_OQ_RECVBUF_SIZE);
nit: There is no need to cast the return value of vzalloc()
oq->buff_info = vzalloc(oq->max_count * OCTEP_VF_OQ_RECVBUF_SIZE);
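For illustration, the same rule holds in userspace C (this is an analogy with calloc(), not the driver code): allocation functions return void *, which converts implicitly to any object pointer type, so the cast on the assignment is redundant noise.

```c
#include <stdlib.h>

/* Hypothetical stand-in for struct octep_vf_rx_buffer, for
 * illustration only. */
struct rx_buffer {
	void *page;
	unsigned int len;
};

static struct rx_buffer *alloc_buff_info(size_t count)
{
	/* No cast needed: the void * return value of calloc()
	 * (like that of vzalloc()) converts implicitly. */
	return calloc(count, sizeof(struct rx_buffer));
}
```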
> + if (unlikely(!oq->buff_info)) {
> + dev_err(&oct->pdev->dev,
> + "Failed to allocate buffer info for OQ-%d\n", q_no);
> + goto buf_list_err;
> + }
> +
> + if (octep_vf_oq_fill_ring_buffers(oq))
> + goto oq_fill_buff_err;
> +
> + octep_vf_oq_reset_indices(oq);
> + oct->hw_ops.setup_oq_regs(oct, q_no);
> + oct->num_oqs++;
> +
> + return 0;
> +
> +oq_fill_buff_err:
> + vfree(oq->buff_info);
> + oq->buff_info = NULL;
> +buf_list_err:
> + dma_free_coherent(oq->dev, desc_ring_size,
> + oq->desc_ring, oq->desc_ring_dma);
> + oq->desc_ring = NULL;
> +desc_dma_alloc_err:
> + vfree(oq);
> + oct->oq[q_no] = NULL;
> +create_oq_fail:
> + return -1;
> +}
...
> +/**
> + * octep_vf_free_iq() - Free Tx queue resources.
> + *
> + * @iq: Octeon Tx queue data structure.
> + *
> + * Free all the resources allocated for a Tx queue.
> + */
> +static void octep_vf_free_iq(struct octep_vf_iq *iq)
> +{
> + struct octep_vf_device *oct = iq->octep_vf_dev;
> + u64 desc_ring_size, sglist_size;
> + int q_no = iq->q_no;
> +
> + desc_ring_size = OCTEP_VF_IQ_DESC_SIZE * CFG_GET_IQ_NUM_DESC(oct->conf);
> +
> + if (iq->buff_info)
> + vfree(iq->buff_info);
nit: vfree() can handle a NULL argument, so there is no need to protect
it with an if condition.
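The same guarantee exists in userspace C, by way of illustration (analogy only, not the driver code): free() is defined to do nothing when passed NULL, so cleanup paths can call it unconditionally.

```c
#include <stdlib.h>

/* Hypothetical cleanup helper: both calls are safe even when the
 * pointer is NULL, so no if-guard is needed (same as vfree()). */
static void release_buffers(int *buff_info, int *desc_ring)
{
	free(buff_info);	/* no-op if buff_info is NULL */
	free(desc_ring);	/* no-op if desc_ring is NULL */
}
```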
> +
> + if (iq->desc_ring)
> + dma_free_coherent(iq->dev, desc_ring_size,
> + iq->desc_ring, iq->desc_ring_dma);
> +
> + sglist_size = OCTEP_VF_SGLIST_SIZE_PER_PKT *
> + CFG_GET_IQ_NUM_DESC(oct->conf);
> + if (iq->sglist)
> + dma_free_coherent(iq->dev, sglist_size,
> + iq->sglist, iq->sglist_dma);
> +
> + vfree(iq);
> + oct->iq[q_no] = NULL;
> + oct->num_iqs--;
> }
...