iommu.lists.linux-foundation.org archive mirror
From: Christoph Hellwig <hch@lst.de>
To: Nicolin Chen <nicoleotsuka@gmail.com>
Cc: Christoph Hellwig <hch@lst.de>,
	Robin Murphy <robin.murphy@arm.com>,
	m.szyprowski@samsung.com, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, vdumpa@nvidia.com
Subject: Re: [PATCH RFC] dma-direct: do not allocate a single page from CMA area
Date: Tue, 20 Nov 2018 10:20:10 +0100	[thread overview]
Message-ID: <20181120092010.GA7270@lst.de> (raw)
In-Reply-To: <20181105224050.GA10411@Asurada-Nvidia.nvidia.com>

On Mon, Nov 05, 2018 at 02:40:51PM -0800, Nicolin Chen wrote:
> > > In general, this seems to make sense to me. It does represent a theoretical 
> > > change in behaviour for devices which have their own CMA area somewhere 
> > > other than kernel memory, and only ever make non-atomic allocations, but 
> > > I'm not sure whether that's a realistic or common enough case to really 
> > > worry about.
> > 
> > Yes, I think we should make the decision in dma_alloc_from_contiguous
> > based on having a per-dev CMA area or not.  There is a lot of cruft in
> 
> It seems that cma_alloc() already has a CMA area check? Wouldn't it
> be redundant to have a similar one in dma_alloc_from_contiguous?

It isn't redundant if it serves a different purpose.
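
To make the distinction concrete, a minimal sketch (the helper name is
made up, and this assumes the CONFIG_DMA_CMA dev->cma_area field):

	/*
	 * cma_alloc() only bails out when it is handed no usable CMA area
	 * at all.  The check discussed here is a policy decision the DMA
	 * layer makes before calling it: does the device bring its own
	 * CMA area, or would we merely be dipping into the global default
	 * area for an allocation that does not need it?
	 */
	static bool dev_has_own_cma_area(struct device *dev)
	{
		return dev && dev->cma_area;	/* per-device area only */
	}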

> > this area that should be cleaned up while we're at it, like always
> > falling back to the normal page allocator if there is no CMA area
> > or nothing suitable is found in dma_alloc_from_contiguous, instead
> > of having to duplicate all that in the caller.
> 
> Am I supposed to clean up the things mentioned above by moving
> the fallback allocator into dma_alloc_from_contiguous, or to just
> move my change (the count check) into dma_alloc_from_contiguous?
> 
> I understand it'd be great to have a cleanup, yet I feel it could
> be done separately, as this patch isn't really a cleanup change.

I can take care of any cleanups.  I've been trying to tidy up that
area anyway.
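
Roughly the shape such a consolidated helper could take (a sketch only,
against the 4.20-era cma_alloc()/dev_get_cma_area() interfaces; the
function name, the gfp argument and the exact policy here are
illustrative, not a committed interface):

	/* assumes <linux/dma-contiguous.h>, <linux/cma.h>, <linux/gfp.h> */
	struct page *dma_alloc_contiguous_sketch(struct device *dev,
			size_t count, unsigned int align, gfp_t gfp)
	{
		struct page *page = NULL;

		if (align > CONFIG_CMA_ALIGNMENT)
			align = CONFIG_CMA_ALIGNMENT;

		/*
		 * Only bother with CMA for multi-page allocations or when
		 * the device has its own CMA area rather than the global
		 * default one.
		 */
		if (count > 1 || (dev && dev->cma_area))
			page = cma_alloc(dev_get_cma_area(dev), count,
					 align, gfp & __GFP_NOWARN);

		/* fall back to the normal page allocator in one place */
		if (!page)
			page = alloc_pages(gfp,
					   get_order(count << PAGE_SHIFT));

		return page;
	}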


Thread overview: 10+ messages
2018-10-31 20:03 [PATCH RFC] dma-direct: do not allocate a single page from CMA area Nicolin Chen
     [not found] ` <20181031200355.19945-1-nicoleotsuka-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2018-11-01 14:07   ` Robin Murphy
     [not found]     ` <13d60076-33ad-b542-4d17-4d717d5aa4d3-5wv7dgnIgG8@public.gmane.org>
2018-11-01 18:04       ` Nicolin Chen
2018-11-01 19:32         ` Robin Murphy
2018-11-01 20:22           ` Nicolin Chen
2018-11-02  6:35       ` Christoph Hellwig
     [not found]         ` <20181102063542.GA17073-jcswGhMUV9g@public.gmane.org>
2018-11-05 22:40           ` Nicolin Chen
2018-11-20  2:39             ` Nicolin Chen
2018-11-20  9:20             ` Christoph Hellwig [this message]
2018-11-21  1:30               ` Nicolin Chen
