public inbox for linux-kernel@vger.kernel.org
* Allocated large blocks of memory on 64 bit linux.
@ 2006-09-20 11:28 Chris Jefferson
  2006-09-20 12:20 ` Avi Kivity
  0 siblings, 1 reply; 2+ messages in thread
From: Chris Jefferson @ 2006-09-20 11:28 UTC (permalink / raw)
  To: Linux Kernel Mailing List

I apologise for this slightly off-topic message, but I believe it can
best be answered here, and hope the question may be interesting.

Many libraries have some kind of dynamically sized container (for
example C++'s std::vector). When the container is full a new block of
memory, typically double the original size, is allocated and the old
data copied across.
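
As a minimal sketch of that doubling strategy (IntBuf is a made-up illustration, not how std::vector is actually implemented):

```cpp
#include <cstddef>

// Hypothetical growable buffer: when size reaches capacity, allocate a
// block twice as large, copy the old data across, and free the old block.
struct IntBuf {
    std::size_t size = 0, cap = 0;
    int *data = nullptr;

    void push_back(int v) {
        if (size == cap) {
            std::size_t newcap = cap ? cap * 2 : 4;  // double on each growth
            int *p = new int[newcap];
            for (std::size_t i = 0; i < size; ++i)
                p[i] = data[i];                      // copy old data across
            delete[] data;
            data = p;
            cap = newcap;
        }
        data[size++] = v;
    }
    ~IntBuf() { delete[] data; }
};
```

Each element is copied O(log n) times in total, so appends stay amortized O(1).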

On a 64-bit architecture, where the address space is massive, it seems
at first glance that a sensible thing to do might be to first make a
buffer of size 4k, and then when this fills up, jump straight to
something huge, like 1MB or even 1GB, as the address space is
effectively infinite compared to the physical memory. Obviously most of
this buffer may never be written to, as the object may never grow large
enough to fill it.

What is the overhead of allocating memory which is never used? Is this
a sensible course of action on 64-bit architectures?
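
For what it's worth, one way to experiment with this idea on Linux is an anonymous mmap() with MAP_NORESERVE, which reserves a large virtual range without committing physical pages up front (a sketch; reserve_and_touch is a made-up helper name):

```cpp
#include <sys/mman.h>
#include <cstddef>

// Reserve 1 GB of virtual address space, but let demand paging back it
// lazily: a physical page is only allocated when it is first written.
bool reserve_and_touch() {
    const std::size_t len = 1UL << 30;  // 1 GB of virtual address space
    void *p = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED)
        return false;
    static_cast<char *>(p)[0] = 1;      // faults in a single 4 KB page
    return munmap(p, len) == 0;
}
```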

Thank you

^ permalink raw reply	[flat|nested] 2+ messages in thread

* Re: Allocated large blocks of memory on 64 bit linux.
  2006-09-20 11:28 Allocated large blocks of memory on 64 bit linux Chris Jefferson
@ 2006-09-20 12:20 ` Avi Kivity
  0 siblings, 0 replies; 2+ messages in thread
From: Avi Kivity @ 2006-09-20 12:20 UTC (permalink / raw)
  To: Chris Jefferson; +Cc: Linux Kernel Mailing List

Chris Jefferson wrote:
>
> I apologise for this slightly off-topic message, but I believe it can
> best be answered here, and hope the question may be interesting.
>
> Many libraries have some kind of dynamically sized container (for
> example C++'s std::vector). When the container is full a new block of
> memory, typically double the original size, is allocated and the old
> data copied across.
>
> On a 64-bit architecture, where the address space is massive, it seems
> at first glance that a sensible thing to do might be to first make a
> buffer of size 4k, and then when this fills up, jump straight to
> something huge, like 1MB or even 1GB, as the address space is
> effectively infinite compared to the physical memory. Obviously most of
> this buffer may never be written to, as the object may never grow large
> enough to fill it.
>
> What is the overhead of allocating memory which is never used? Is this
> a sensible course of action on 64-bit architectures?
>

A 1MB virtual area with just one page instantiated has an (amortized)
2KB cost in page tables, while a similar 1GB mapping has an 8KB cost.
That's a 50%-200% overhead, which is quite bad.  Cache line usage is
also worse, since each PTE now needs a full cache line (two for the 1GB
version).
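
Spelling out that arithmetic (assuming x86-64 four-level paging: 4 KB page-table pages holding 512 eight-byte entries each):

```cpp
#include <cstdint>

// Each 4 KB page-table page holds 512 entries, so one PTE page maps
// 512 * 4 KB = 2 MB, and one PMD page maps 512 * 2 MB = 1 GB.
constexpr std::uint64_t kPage    = 4096;
constexpr std::uint64_t kPteSpan = 512 * kPage;     // 2 MB per PTE page
constexpr std::uint64_t kPmdSpan = 512 * kPteSpan;  // 1 GB per PMD page

// A sparse 1 MB mapping shares its PTE page with 2 MB of address space:
// amortized cost = 4 KB * (1 MB / 2 MB) = 2 KB.
constexpr std::uint64_t kCost1MB = kPage * (1ULL << 20) / kPteSpan;

// A sparse 1 GB mapping with one resident page has a PTE page and a
// PMD page entirely to itself: 4 KB + 4 KB = 8 KB.
constexpr std::uint64_t kCost1GB = 2 * kPage;

static_assert(kCost1MB == 2048, "2 KB amortized for the 1 MB mapping");
static_assert(kCost1GB == 8192, "8 KB for the 1 GB mapping");
```

Against a single resident 4 KB page, 2 KB is the 50% overhead and 8 KB the 200% overhead quoted above.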

In addition, the virtual address space is not infinite. On x86-64,
userspace has 47 bits = 128 TB, enough for 128K of these 1GB mappings,
so your program would exhaust it after allocating about 131,000
buffers, at which point the touched pages would still occupy less than
a gigabyte of physical RAM.
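
A quick sanity check of those numbers:

```cpp
#include <cstdint>

// 47 bits of user virtual address space, carved into 1 GB mappings.
constexpr std::uint64_t kUserVA     = 1ULL << 47;            // 128 TB
constexpr std::uint64_t kMapping    = 1ULL << 30;            // 1 GB each
constexpr std::uint64_t kMaxBuffers = kUserVA / kMapping;    // 131072

// If only one 4 KB page is ever touched per buffer, exhausting the
// address space consumes 131072 * 4 KB = 512 MB of physical RAM.
constexpr std::uint64_t kResident = kMaxBuffers * 4096;

static_assert(kMaxBuffers == 131072, "~128K one-gigabyte mappings fit");
static_assert(kResident == (512ULL << 20), "512 MB of touched pages");
```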

-- 
error compiling committee.c: too many arguments to function



end of thread, other threads:[~2006-09-20 12:20 UTC | newest]

Thread overview: 2+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2006-09-20 11:28 Allocated large blocks of memory on 64 bit linux Chris Jefferson
2006-09-20 12:20 ` Avi Kivity
