linuxppc-dev.lists.ozlabs.org archive mirror
From: Pantelis Antoniou <panto@intracom.gr>
To: Andrey Volkov <avolkov@varma-el.com>
Cc: Andrew Morton <akpm@osdl.org>,
	jes@trained-monkey.org, linux-kernel@vger.kernel.org,
	linuxppc-embedded@ozlabs.org
Subject: Re: [RFC] genalloc != generic DEVICE memory allocator
Date: Thu, 22 Dec 2005 10:38:12 +0200
Message-ID: <43AA65F4.10409@intracom.gr>
In-Reply-To: <43A98F90.9010001@varma-el.com>

Andrey Volkov wrote:
> Hello Jes and all
> 
> I am trying to use your allocator (gen_pool_xxx), the idea of which
> is a nice one. But its current implementation is inappropriate for
> _device_ (i.e. on-chip, like framebuffer) memory allocation, for
> the following reasons:
> 
>  1) Device memory is an expensive resource in terms of access time
>     and/or size, so we usually cannot use this memory to hold the
>     free-block lists.
>  2) Device memory usually has special access requirements
>     (alignment/special instructions), so we cannot use parts of the
>     allocated blocks for control structures (this problem is already
>     solved in your implementation; it is a general remark).
>  3) The obvious (IMHO) workflow of a device memory allocator looks like:
> 	- at startup time, the driver allocates some big,
> 	  (almost) static memory chunk(s) for control/data structures;
> 	- while the device is running, the driver allocates many small
> 	  memory blocks of almost identical size.
>     Such behavior degenerates the buddy method into a first/best-fit
>     method (with long walks along the free-node list).
>  4) The simple binary buddy method is far from perfect for a device
>     because of its large internal fragmentation, especially for
>     network/MFD devices, for which the size of the allocated data is
>     very often not a power of 2.
> 
> I have started to modify your code to satisfy the demands above,
> but first I would like to hear your opinion, or anybody else's.
> 
> In particular, I would be very happy if somebody has, and could
> share with everybody, some device-specific memory usage statistics.
> 

Hi Andrey,

FYI, in arch/ppc/lib/rheap.c there's an implementation of a remote heap.

It is currently used for the management of Freescale's CPM1 & CPM2 internal
dual-port RAM.

Take a look, it might be what you have in mind.
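
To give a rough idea, usage looks something like the sketch below. This
is only from memory of the rheap.h interface of that era (rh_create /
rh_attach_region / rh_alloc / rh_free), so the exact signatures may
differ; DPRAM_BASE/DPRAM_SIZE and the dpram_* helpers are made-up
placeholders, not code from an actual driver.

#include <linux/err.h>
#include <asm/rheap.h>

/* Placeholder region; a real driver would get this from the board or
 * device setup code. */
#define DPRAM_BASE	((void *)0xf0002000)
#define DPRAM_SIZE	0x2000

static rh_info_t *dpram_heap;

static int dpram_init(void)
{
	/* All bookkeeping lives in normal kernel memory; the allocator
	 * never touches the managed region itself, which addresses
	 * points 1) and 2) above. */
	dpram_heap = rh_create(8);		/* 8-byte alignment */
	if (IS_ERR(dpram_heap))
		return PTR_ERR(dpram_heap);

	/* Hand the on-chip region over to the heap. */
	return rh_attach_region(dpram_heap, DPRAM_BASE, DPRAM_SIZE);
}

static void *dpram_alloc(int size)
{
	/* Blocks are tracked by (start, size) descriptors kept outside
	 * the device memory, so sizes need not be rounded up to powers
	 * of 2 as with a binary buddy. */
	return rh_alloc(dpram_heap, size, "my-driver");
}

static void dpram_free(void *ptr)
{
	rh_free(dpram_heap, ptr);
}

It is essentially a first-fit allocator over a list of block descriptors,
so it trades some allocation speed for zero storage overhead inside the
managed region and no power-of-2 internal fragmentation.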

Regards

Pantelis

Thread overview: 14+ messages
2005-12-21 17:23 [RFC] genalloc != generic DEVICE memory allocator Andrey Volkov
2005-12-22  8:38 ` Pantelis Antoniou [this message]
2005-12-22 13:48   ` Andrey Volkov
2005-12-22 14:15     ` Pantelis Antoniou
2005-12-22 15:44       ` Andrey Volkov
2005-12-22 16:09         ` Pantelis Antoniou
     [not found] ` <43A9B2F1.8090402@246tNt.com>
2005-12-22 13:41   ` Andrey Volkov
2005-12-22 15:37 ` Jes Sorensen
2005-12-22 18:18   ` Andrey Volkov
2005-12-22 18:33     ` Pantelis Antoniou
2005-12-23  7:38       ` Andrey Volkov
2005-12-23  7:46         ` Pantelis Antoniou
2005-12-23 10:17           ` Andrey Volkov
2005-12-23 10:59     ` Jes Sorensen
