From: Marco Stornelli <marco.stornelli@coritel.it>
To: Ruksen INANIR <rukseninanir@gmail.com>
Cc: linuxppc-embedded@ozlabs.org
Subject: Re: increase lowmem value for ppc
Date: Thu, 17 Jul 2008 13:35:21 +0200
Message-ID: <487F2E79.1030102@coritel.it>
In-Reply-To: <487F2B7D.6080109@gmail.com>
Ruksen INANIR wrote:
>
> Are there any side effects of changing the address split?
Yes: with that split, the application address space is smaller than
normal. Some big applications (for example, some DBMSs) might not work.
Usually, with a 2G/2G split, most applications work well.
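As a rough way to see the difference, here is a minimal sketch (mine,
not part of this exchange) that reserves anonymous mappings until
mmap() fails. On a 32-bit kernel it should report about 3 GB of usable
address space with the default 3G/1G split and about 2 GB with 2G/2G:

    /* Sketch: probe how much address space a user process gets.
     * PROT_NONE + MAP_NORESERVE reserves address space without
     * committing memory, so only the kernel/user split limits it. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t chunk = 64UL << 20;   /* reserve 64 MB at a time */
        size_t total = 0;

        while (mmap(NULL, chunk, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE,
                    -1, 0) != MAP_FAILED)
            total += chunk;          /* mappings leaked on purpose */

        printf("usable address space: ~%zu MB\n", total >> 20);
        return 0;
    }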
> What should I change for a 2G/2G split? Should I shift the kernel start?
> Thanks
>
Yes, in the advanced options of the kernel configuration menu, but
sometimes it's not easy. Some time ago I had to do the same thing, and
I had to change the kernel code because some addresses were hard-coded.
Be careful, because it's not an easy operation. However, I'd suggest
you use highmem instead: memory performance is worse than with direct
memory mapping, but usually it's not a problem because the overhead is low.
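On arch/ppc these knobs live under "Advanced setup"
(CONFIG_ADVANCED_OPTIONS) in the configuration menu. What follows is a
minimal sketch of a 2G/2G configuration; the exact option names and
values should be checked against your 2.4.22 tree, so treat everything
below as an assumption:

    CONFIG_ADVANCED_OPTIONS=y
    # Move the kernel base from 0xC0000000 (3G/1G) down to 0x80000000 (2G/2G)
    CONFIG_KERNEL_START=0x80000000
    # Cap the user address space at 2 GB to match
    CONFIG_TASK_SIZE=0x80000000
    # Directly mapped lowmem: 1.5 GB still leaves room for vmalloc/ioremap
    CONFIG_LOWMEM_SIZE=0x60000000

The highmem alternative is simply CONFIG_HIGHMEM=y with the default
3G/1G split and no change to the kernel base.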
>
> Marco Stornelli wrote:
>> Marco Stornelli wrote:
>>> Ruksen INANIR wrote:
>>>>
>>>> Is there a way to increase the lowmem value for ppc? MAX_LOW_MEM
>>>> is defined with a maximum of 768 MB, but a value around 1.5 GB
>>>> works with no problem. However, when I try to increase this value
>>>> to 1520 MB or more, the kernel complains about no space for memory
>>>> allocation when loading kernel modules.
>>>> I do not want to use the HIGHMEM config. What is the max lowmem
>>>> value for a ppc system? What other settings are needed to use
>>>> 1520 MB (or more) as lowmem?
>>>> The ppc card has 2 GB of on-board memory. The 2.4.22 kernel is used.
>>>>
>>>> Thanks
>>> On a 32-bit arch you cannot directly map more than 1GB of memory
>>> (minus some space reserved for kernel operations). The value of
>>> 768MB was not chosen by chance.
>>>
>> Just an additional comment: I meant with the 3G/1G address split. To
>> map more than 1GB, you have to change the split, to 2G/2G for example.
>>
>
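To make the 768MB default concrete, here is the back-of-the-envelope
arithmetic; the exact vmalloc/ioremap reserve is configuration-dependent,
so the 256MB figure is an assumption:

    kernel window = 4 GB - 3 GB (user space)   = 1 GB at 0xC0000000
    vmalloc/ioremap reserve                    ~ 256 MB
    max lowmem    = 1 GB - 256 MB              = 768 MB = 0x30000000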
--
Marco Stornelli
Embedded Software Engineer
CoRiTeL - Consorzio di Ricerca sulle Telecomunicazioni
http://www.coritel.it
marco.stornelli@coritel.it
+39 06 72582838
Thread overview: 8+ messages
2008-07-17 10:32 increase lowmem value for ppc Ruksen INANIR
2008-07-17 11:14 ` Marco Stornelli
2008-07-17 11:21 ` Marco Stornelli
2008-07-17 11:22 ` Ruksen INANIR
2008-07-17 11:35 ` Marco Stornelli [this message]
2008-07-17 11:51 ` Ruksen INANIR
2008-07-17 12:05 ` Marco Stornelli
[not found] <mailman.2286.1216296226.2883.linuxppc-embedded@ozlabs.org>
2008-07-17 20:07 ` Siva Prasad