From: Rob Herring <robh+dt@kernel.org>
To: Miles Chen <miles.chen@mediatek.com>
Cc: Frank Rowand <frowand.list@gmail.com>,
devicetree@vger.kernel.org,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE"
<linux-arm-kernel@lists.infradead.org>,
linux-mediatek@lists.infradead.org, wsd_upstream@mediatek.com
Subject: Re: [RFC PATCH] of: make MAX_RESERVED_REGIONS configurable
Date: Sat, 24 Nov 2018 14:56:13 -0600 [thread overview]
Message-ID: <CAL_Jsq+O1YDP1yiHZpcxUaFG1LYDqi74D_kEnErP+cwGDppVWQ@mail.gmail.com> (raw)
In-Reply-To: <1542855088.15789.6.camel@mtkswgap22>
On Wed, Nov 21, 2018 at 8:51 PM Miles Chen <miles.chen@mediatek.com> wrote:
>
> On Wed, 2018-11-21 at 10:39 -0600, Rob Herring wrote:
> > On Wed, Nov 21, 2018 at 2:11 AM <miles.chen@mediatek.com> wrote:
> > >
> > > From: Miles Chen <miles.chen@mediatek.com>
> > >
> > > When we use more than 32 entries in /reserved-memory,
> > > there will be an error message: "not enough space all defined regions.".
> > > We can increase MAX_RESERVED_REGIONS to fix this.
> > >
> > > commit 22f8cc6e3373 ("drivers: of: increase MAX_RESERVED_REGIONS to 32")
> > > increased MAX_RESERVED_REGIONS to 32 but I'm not sure if increasing
> > > MAX_RESERVED_REGIONS to 64 is suitable for everyone.
> > >
> > > In this RFC patch, CONFIG_MAX_OF_RESERVED_REGIONS is added and used as
> > > MAX_RESERVED_REGIONS. Add a sanity test to make sure that
> > > MAX_RESERVED_REGIONS is less than INIT_MEMBLOCK_REGIONS.
> > > Users can configure CONFIG_MAX_OF_RESERVED_REGIONS according to their
> > > needs.
> >
> > I don't want a kconfig option for this. I think it should be dynamic instead.
> >
> > The current flow is like this:
> >
> > for each reserved node:
> > - call memblock_reserve
> > - Add info to reserved_mem array
> >
> > I think we should change it to:
> >
> > for each reserved node:
> > - call memblock_reserve
> > - count number of nodes
> >
> > Alloc array using memblock_alloc
> >
> > for each reserved node:
> > - Add info to reserved_mem array
> >
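[Editorial note: the two-pass flow proposed above can be sketched as follows. memblock_reserve() and memblock_alloc() are the real kernel APIs but are stubbed out here; the node table, struct layout, and init_reserved_mem() helper are invented for illustration, not the actual of_reserved_mem code.]

```c
#include <stddef.h>

/* Minimal stand-in for the kernel's reserved_mem record. */
struct reserved_mem {
	unsigned long base;
	unsigned long size;
};

/* Pretend /reserved-memory has three child nodes. */
static const struct reserved_mem dt_nodes[] = {
	{ 0x40000000UL, 0x100000UL },
	{ 0x48000000UL, 0x200000UL },
	{ 0x50000000UL, 0x400000UL },
};
#define NR_DT_NODES (sizeof(dt_nodes) / sizeof(dt_nodes[0]))

static struct reserved_mem storage[16];	/* backs the fake allocator */

static void memblock_reserve(unsigned long base, unsigned long size)
{
	(void)base;
	(void)size;	/* the real call marks the range as reserved */
}

static void *memblock_alloc(size_t size)
{
	(void)size;	/* the real call carves memory out of memblock */
	return storage;
}

/* Two-pass flow: reserve and count, then allocate, then fill. */
static struct reserved_mem *init_reserved_mem(size_t *countp)
{
	struct reserved_mem *array;
	size_t i, count = 0;

	/* Pass 1: reserve every region and count the nodes. */
	for (i = 0; i < NR_DT_NODES; i++) {
		memblock_reserve(dt_nodes[i].base, dt_nodes[i].size);
		count++;
	}

	/* All regions are reserved, so allocating is now safe. */
	array = memblock_alloc(count * sizeof(*array));

	/* Pass 2: record each node's info in the dynamically sized array. */
	for (i = 0; i < NR_DT_NODES; i++)
		array[i] = dt_nodes[i];

	*countp = count;
	return array;
}
```

This removes the fixed MAX_RESERVED_REGIONS limit: the array is sized from the count found in pass 1 rather than from a compile-time constant.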
>
> thanks for your comment.
>
> I reviewed the flow and it might be easier to count the
> nodes and set up the array first:
>
> for each reserved node:
> - count number of nodes
>
> Alloc array using memblock_alloc
>
>
> for each reserved node:
> - call memblock_reserve
The order here is wrong. It is important that you reserve the memory
blocks before doing any allocations.
> - Add info to reserved_mem array
>
> What do you think?
>
> > Rob
>
>
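[Editorial note: the ordering constraint Rob points out (reserve every region before calling the allocator) can be illustrated with a toy top-down allocator. All names and the single-span memory map here are invented for illustration; the real memblock allocator is far more involved. If the array is allocated before a node's region is reserved, the allocator is free to hand back memory inside that region.]

```c
/* One 0x1000-byte span of memory and at most one reserved range:
 * the bare minimum needed to show the ordering hazard. */
static unsigned long res_base, res_size;

static void toy_reserve(unsigned long base, unsigned long size)
{
	res_base = base;
	res_size = size;
}

/* Top-down allocation that skips the reserved range if it sits at
 * the top of the span; with nothing reserved, it allocates from the top. */
static unsigned long toy_alloc(unsigned long size)
{
	unsigned long top = 0x1000UL;

	if (res_size && res_base + res_size == top)
		top = res_base;
	return top - size;
}

static int overlaps(unsigned long a, unsigned long asz,
		    unsigned long b, unsigned long bsz)
{
	return a < b + bsz && b < a + asz;
}
```

Allocating first lets the array land inside the range a later /reserved-memory node claims; reserving first steers the allocator clear of it, which is why memblock_reserve() must run for every node before memblock_alloc() is called.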
Thread overview: 6+ messages
2018-11-21 8:11 [RFC PATCH] of: make MAX_RESERVED_REGIONS configurable miles.chen
2018-11-21 16:39 ` Rob Herring
2018-11-22 2:51 ` Miles Chen
2018-11-24 20:56 ` Rob Herring [this message]
2018-11-26 1:33 ` Miles Chen
2018-11-28 1:56 ` Miles Chen