public inbox for linux-kernel@vger.kernel.org
From: Christoph Hellwig <hch@lst.de>
To: Leon Romanovsky <leon@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>,
	Doug Ledford <dledford@redhat.com>,
	Jason Gunthorpe <jgg@nvidia.com>,
	Maor Gottlieb <maorg@nvidia.com>,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org
Subject: Re: [PATCH rdma-next v1 3/4] lib/scatterlist: Add support in dynamic allocation of SG table from pages
Date: Tue, 15 Sep 2020 18:23:39 +0200	[thread overview]
Message-ID: <20200915162339.GC24320@lst.de> (raw)
In-Reply-To: <20200910134259.1304543-4-leon@kernel.org>

> +#ifndef CONFIG_ARCH_NO_SG_CHAIN
> +struct scatterlist *sg_alloc_table_append(
> +	struct sg_table *sgt, struct page **pages, unsigned int n_pages,
> +	unsigned int offset, unsigned long size, unsigned int max_segment,
> +	gfp_t gfp_mask, struct scatterlist *prv, unsigned int left_pages);
> +#endif

Odd indentation here: we either do two tabs (my preference) or align
to the opening brace (which you seem to be doing elsewhere in the series).
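I.e. something like this with two-tab continuation lines (parameter
order taken from your patch):

```c
struct scatterlist *sg_alloc_table_append(struct sg_table *sgt,
		struct page **pages, unsigned int n_pages, unsigned int offset,
		unsigned long size, unsigned int max_segment, gfp_t gfp_mask,
		struct scatterlist *prv, unsigned int left_pages);
```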

> +	/* Check if last entry should be kept for chaining */
> +	next_sg = sg_next(prv);
> +	if (!sg_is_last(next_sg) || left_npages == 1)
> +		return next_sg;
> +
> +	ret = sg_alloc_next(table, next_sg,
> +			    min_t(unsigned long, left_npages,
> +				  SG_MAX_SINGLE_ALLOC),
> +			    SG_MAX_SINGLE_ALLOC, gfp_mask);

Do we even need the sg_alloc_next helper added in the last patch,
given that this fairly simple function is the only caller?
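I.e. just open-code the allocation here.  Untested, and I'm guessing a
bit at what sg_alloc_next does in your previous patch, but roughly
something like this should be all we need:

```c
	unsigned int alloc_size;
	struct scatterlist *sgl;

	alloc_size = min_t(unsigned long, left_npages, SG_MAX_SINGLE_ALLOC);
	sgl = kmalloc_array(alloc_size, sizeof(struct scatterlist), gfp_mask);
	if (unlikely(!sgl))
		return ERR_PTR(-ENOMEM);
	sg_init_table(sgl, alloc_size);
	/* turn the old tail entry into a chain entry pointing at the new chunk */
	__sg_chain(next_sg, sgl);
```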

> +static struct scatterlist *alloc_from_pages_common(
> +	struct sg_table *sgt, struct page **pages, unsigned int n_pages,
> +	unsigned int offset, unsigned long size, unsigned int max_segment,
> +	gfp_t gfp_mask, struct scatterlist *prv, unsigned int left_pages)

Same strange one-tab indent as above.

> +#ifndef CONFIG_ARCH_NO_SG_CHAIN
> +/**
> + * sg_alloc_table_append - Allocate and initialize an sg table from
> + *                         an array of pages
> + * @sgt:	 The sg table header to use
> + * @pages:	 Pointer to an array of page pointers
> + * @n_pages:	 Number of pages in the pages array
> + * @offset:      Offset from start of the first page to the start of a buffer
> + * @size:        Number of valid bytes in the buffer (after offset)
> + * @max_segment: Maximum size of a scatterlist node in bytes (page aligned)
> + * @gfp_mask:	 GFP allocation mask
> + * @prv:	 Last populated sge in sgt
> + * @left_pages:  Pages left for the caller to append after this call
> + *
> + *  Description:
> + *    If @prv is NULL, it allocates and initializes an sg table from a list
> + *    of pages. Contiguous ranges of the pages are squashed into a single
> + *    scatterlist node up to the maximum size specified in @max_segment. A
> + *    user may provide an offset at the start and a size of valid data in a
> + *    buffer specified by the page array. A user may provide @prv to chain
> + *    pages to the last entry in sgt. The returned sg table is released by
> + *    sg_free_table.
> + *
> + * Returns:
> + *   Last SGE in sgt on success, negative error on failure.
> + *
> + * Notes:
> + *   If this function returns an error, the caller must call
> + *   sg_free_table() to clean up any leftover allocations.
> + */
> +struct scatterlist *sg_alloc_table_append(
> +	struct sg_table *sgt, struct page **pages, unsigned int n_pages,
> +	unsigned int offset, unsigned long size, unsigned int max_segment,
> +	gfp_t gfp_mask, struct scatterlist *prv, unsigned int left_pages)

One-tab indent again.

> +{
> +	return alloc_from_pages_common(sgt, pages, n_pages, offset, size,
> +				       max_segment, gfp_mask, prv, left_pages);
> +}
> +EXPORT_SYMBOL_GPL(sg_alloc_table_append);
> +#endif

So the reason I suggested not providing sg_alloc_table_append
if CONFIG_ARCH_NO_SG_CHAIN was set was to avoid the extra
alloc_from_pages_common helper.  It might be better to move to your
run-time check and just make it conditional on a non-NULL prv pointer,
which would allow us to merge alloc_from_pages_common into
sg_alloc_table_append.  Sorry for leading you down this path.
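I.e. something like this (untested, just to illustrate the shape I
have in mind; the exact error value is a guess):

```c
struct scatterlist *sg_alloc_table_append(struct sg_table *sgt,
		struct page **pages, unsigned int n_pages, unsigned int offset,
		unsigned long size, unsigned int max_segment, gfp_t gfp_mask,
		struct scatterlist *prv, unsigned int left_pages)
{
	/* appending requires chaining, which this arch can't do */
	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && prv)
		return ERR_PTR(-EOPNOTSUPP);

	/* ... body of what is currently alloc_from_pages_common ... */
}
```

That way the #ifndef in the header and around the export go away
entirely, and callers on CONFIG_ARCH_NO_SG_CHAIN architectures still
get the non-appending behavior.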


Thread overview: 11+ messages
2020-09-10 13:42 [PATCH rdma-next v1 0/4] scatterlist: add sg_alloc_table_append function Leon Romanovsky
2020-09-10 13:42 ` [PATCH rdma-next v1 1/4] lib/scatterlist: Refactor sg_alloc_table_from_pages Leon Romanovsky
2020-09-15 16:16   ` Christoph Hellwig
2020-09-16  7:17     ` Leon Romanovsky
2020-09-10 13:42 ` [PATCH rdma-next v1 2/4] lib/scatterlist: Add support in dynamically allocation of SG entries Leon Romanovsky
2020-09-15 16:19   ` Christoph Hellwig
2020-09-16  7:18     ` Leon Romanovsky
2020-09-10 13:42 ` [PATCH rdma-next v1 3/4] lib/scatterlist: Add support in dynamic allocation of SG table from pages Leon Romanovsky
2020-09-15 16:23   ` Christoph Hellwig [this message]
2020-09-16  7:19     ` Leon Romanovsky
2020-09-10 13:42 ` [PATCH rdma-next v1 4/4] RDMA/umem: Move to allocate " Leon Romanovsky
