From: Milton Miller <miltonm@bga.com>
To: Andres Salomon <dilinger@queued.net>
Cc: Grant Likely <grant.likely@secretlab.ca>,
    devicetree-discuss@lists.ozlabs.org,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH 3/3] x86: OLPC: speed up device tree creation during boot (v2)
Date: Fri, 12 Nov 2010 01:48:30 -0600 [thread overview]
Message-ID: <reply-olpc-3-v2@mdm.bga.com> (raw)
In-Reply-To: <20101111214546.4e573cad@queued.net>
On Thu, 11 Nov 2010 around 21:45:46 -0800, Andres Salomon wrote:
> diff --git a/arch/x86/platform/olpc/olpc_dt.c b/arch/x86/platform/olpc/olpc_dt.c
> index b8c8ff9..0ab824d 100644
> --- a/arch/x86/platform/olpc/olpc_dt.c
> +++ b/arch/x86/platform/olpc/olpc_dt.c
> @@ -126,14 +126,31 @@ static unsigned int prom_early_allocated __initdata;
>
> void * __init prom_early_alloc(unsigned long size)
> {
> + static u8 *mem = NULL;
> + static size_t free_mem = 0;
Static variables are implicitly initialized to 0 and NULL, so the
explicit initializers here are unnecessary.
> void *res;
>
> - res = alloc_bootmem(size);
> - if (res)
> - memset(res, 0, size);
> -
> - prom_early_allocated += size;
> + if (free_mem < size) {
> + const size_t chunk_size = max(PAGE_SIZE, size);
> +
> + /*
> +	 * To minimize the number of allocations, grab at least 4k of
> + * memory (that's an arbitrary choice that matches PAGE_SIZE on
> + * the platforms we care about, and minimizes wasted bootmem)
> + * and hand off chunks of it to callers.
> + */
> + res = mem = alloc_bootmem(chunk_size);
> + if (!res)
> + return NULL;
Oops.  If alloc_bootmem fails, we clobber mem with NULL but leave
free_mem untouched, so a later call (possibly for a smaller size) may
hand out memory starting at NULL.
I suggest just assigning res above, and adding mem = res inside this if,
after the NULL check.
Oh, and this is alloc_bootmem, not alloc_bootmem_nopanic ... should it
be?  alloc_bootmem panics on failure, so the NULL check is dead code as
written.
> + prom_early_allocated += chunk_size;
> + memset(res, 0, chunk_size);
> + free_mem = chunk_size;
> + }
>
> + /* allocate from the local cache */
> + free_mem -= size;
> + res = mem;
> + mem += size;
> return res;
> }
>
milton
Thread overview: 17+ messages
2010-11-12 5:45 [PATCH 3/3] x86: OLPC: speed up device tree creation during boot (v2) Andres Salomon
2010-11-12 7:48 ` Milton Miller [this message]
2010-11-12 8:27 ` Andres Salomon
2010-11-14 9:50 ` Ingo Molnar
2010-11-15 4:21 ` H. Peter Anvin
2010-11-15 7:02 ` Ingo Molnar
2010-11-15 17:43 ` H. Peter Anvin
2010-11-17 6:12 ` [PATCH 3/3] x86: OLPC: speed up device tree creation during boot (v3) Andres Salomon
2010-11-29 23:39 ` [PATCH 3/3] x86: OLPC: speed up device tree creation during boot (v4) Andres Salomon
2010-12-16 2:58 ` [tip:x86/olpc] x86, olpc: Speed up device tree creation during boot tip-bot for Andres Salomon
2010-11-18 8:34 ` [PATCH 3/3] x86: OLPC: speed up device tree creation during boot (v2) Ingo Molnar
2010-11-18 11:02 ` Michael Ellerman
2010-11-18 15:04 ` H. Peter Anvin
2010-11-18 17:41 ` Andres Salomon
2010-11-18 17:48 ` H. Peter Anvin
2010-11-19 20:24 ` Andres Salomon
2010-12-23 11:57 ` Ingo Molnar