public inbox for linux-mm@kvack.org
From: ebiederm+eric@ccr.net (Eric W. Biederman)
To: "Stephen C. Tweedie" <sct@redhat.com>
Cc: "Benjamin C.R. LaHaise" <blah@kvack.org>,
	Daniel Blakeley <daniel@msc.cornell.edu>,
	linux-mm@kvack.org
Subject: Re: Large memory system
Date: 08 Feb 1999 09:31:11 -0600	[thread overview]
Message-ID: <m17lts52v4.fsf@flinx.ccr.net> (raw)
In-Reply-To: "Stephen C. Tweedie"'s message of "Mon, 8 Feb 1999 11:24:58 GMT"

>>>>> "ST" == Stephen C Tweedie <sct@redhat.com> writes:

ST> Hi,
ST> On Sat, 30 Jan 1999 12:00:53 -0500 (EST), "Benjamin C.R. LaHaise"
ST> <blah@kvack.org> said:

>> Easily isn't a good way of putting it, unless you're talking about doing
>> something like mmap on /dev/mem, in which case you could make the
>> user/kernel virtual split weigh heavy on the user side and do memory
>> allocation yourself.  If you're talking about doing it transparently,
>> your best bet is to do something like davem's suggested high mem
>> approach, and only use non-kernel mapped memory for user pages... if you
>> want to be able to support the page cache in high memory, things get
>> messy.

ST> No it doesn't!  The only tricky thing is IO, but we need to have bounce
ST> buffers to high memory anyway for swapping.  The page cache uses "struct
ST> page" addresses in preference to actual page data pointers almost
ST> everywhere anyway, and whenever we are doing something like read(2) or
ST> write(2) functions, we just need a single per-CPU virtual pte in the
ST> vmalloc region to temporarily map the page into memory while we copy to
ST> user space (and remember that we do this from the context of the user
ST> process anyway, so we don't have to remap the user page even if it is in
ST> high memory).

Cool.  We now have an idea that sounds possible.

The only remaining question is how much of a performance hit it would be to
rewrite the contents of that pte (and flush its TLB entry) all of the time.

Every page of every read/write syscall, as well as every copy down to an I/O
bounce buffer, is common enough that we would probably see a performance hit.

The other thing that happens is that we start breaking assumptions about fixed
limits tied to the architecture's word size.  Things like the swap entry
encoding may need to be expanded.

Eric
--
To unsubscribe, send a message with 'unsubscribe linux-mm my@address'
in the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://humbolt.geo.uu.nl/Linux-MM/

Thread overview: 11+ messages
1999-01-30 13:36 Large memory system Daniel Blakeley
1999-01-30 17:00 ` Benjamin C.R. LaHaise
1999-02-08 11:24   ` Stephen C. Tweedie
1999-02-08 15:31     ` Eric W. Biederman [this message]
1999-02-09 22:57       ` Stephen C. Tweedie
1999-02-01 15:59 ` Rik van Riel
1999-02-08 11:22 ` Stephen C. Tweedie
  -- strict thread matches above, loose matches on Subject: below --
1999-02-08 20:33 Manfred Spraul
1999-02-10 14:25 ` Stephen C. Tweedie
1999-02-10 17:02 Manfred Spraul
1999-02-11 11:12 ` Stephen C. Tweedie
