From: Jerome Glisse <jglisse@redhat.com>
To: Nicolin Chen <nicoleotsuka@gmail.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: mm/hmm: a simple question regarding devm_request_mem_region()
Date: Wed, 21 Mar 2018 18:56:32 -0400	[thread overview]
Message-ID: <20180321225632.GI3214@redhat.com> (raw)
In-Reply-To: <20180321222357.GA31089@Asurada-Nvidia>

On Wed, Mar 21, 2018 at 03:23:57PM -0700, Nicolin Chen wrote:
> Hello Jerome,
> 
> I started looking at the mm/hmm code and have a question about the
> devm_request_mem_region() call in the hmm_devmem_add() implementation:
> 
> >	addr = min((unsigned long)iomem_resource.end,
> >		   (1UL << MAX_PHYSMEM_BITS) - 1);
> 
> My main question is about this addr, which I find confusing. The code
> is trying to derive an addr from the end of the memory space. However,
> when I tried this on an ARM64 platform where iomem_resource.end is -1,
> it takes "(1UL << MAX_PHYSMEM_BITS) - 1" as the addr base, yet that
> addr is way beyond the actual main memory size available on my board.
> Is HMM supposed to get a memory region like this? Could you give me a
> hint to help me understand it?

What are you trying to do? hmm_devmem_add() is used either for device
private memory or for device public memory. Device private memory is
memory that is not accessible by the CPU; the code you are pointing to
handles that case, where I try to find a range of physical addresses
that is not currently in use (the memory not being CPU-accessible means
that no valid physical address is reserved for it). On x86,
MAX_PHYSMEM_BITS is defined to something that makes sense, but as is
often the case with such defines, ARM seems to define an unrealistic
value. My advice is to fix the definition for ARM. IIRC it depends on
the SoC, and I don't know whether you can detect that at build time,
but you can probably pick the biggest possible value at build time
(1 << 47 or something like that).
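
To make this concrete, here is a minimal sketch of the search loop
around the line you quoted, written from memory of the 4.16-era
hmm_devmem_add(), so details may differ from your tree:

	/* Start at the highest plausible physical address and walk
	 * down, section-aligned, until an unused range is found that
	 * can be reserved for the device private pages. */
	size = ALIGN(size, PA_SECTION_SIZE);
	addr = min((unsigned long)iomem_resource.end,
		   (1UL << MAX_PHYSMEM_BITS) - 1);
	addr = addr - size + 1UL;
	for (; addr > size && addr >= iomem_resource.start; addr -= size) {
		/* Skip ranges overlapping RAM or another resource. */
		if (region_intersects(addr, size, 0, IORES_DESC_NONE) !=
		    REGION_DISJOINT)
			continue;
		/* Reserve the range so nothing else can claim it. */
		devmem->resource = devm_request_mem_region(device, addr,
							   size,
							   dev_name(device));
		if (!devmem->resource)
			return ERR_PTR(-ENOMEM);
		break;
	}

With an unrealistically large MAX_PHYSMEM_BITS the loop simply starts
its search far above any real memory, which is what you are seeing.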

But all of this assumes that you have a device with its own memory that
is not accessible from the CPU, which is very uncommon on ARM. The only
case I know of is a regular PCIE GPU plugged into an ARM system with
PCIE.
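
For reference, a driver with such memory would register it roughly like
below. This is a hedged sketch against the 4.16-era API; the
my_devmem_* names are placeholders, and the callback bodies are stubs,
not a real implementation:

	static void my_devmem_free(struct hmm_devmem *devmem,
				   struct page *page)
	{
		/* Hand the device page back to the driver's allocator. */
	}

	static int my_devmem_fault(struct hmm_devmem *devmem,
				   struct vm_area_struct *vma,
				   unsigned long addr,
				   const struct page *page,
				   unsigned int flags,
				   pmd_t *pmdp)
	{
		/* A real driver would migrate the data back to system
		 * memory here; failing that, signal a bus error. */
		return VM_FAULT_SIGBUS;
	}

	static const struct hmm_devmem_ops my_devmem_ops = {
		.free	= my_devmem_free,
		.fault	= my_devmem_fault,
	};

	/* Ask HMM to carve `size` bytes of unused physical address
	 * space out for the device's CPU-invisible memory. */
	devmem = hmm_devmem_add(&my_devmem_ops, &pdev->dev, size);
	if (IS_ERR(devmem))
		return PTR_ERR(devmem);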

Hope this helps,
Jerome
