public inbox for linux-kernel@vger.kernel.org
From: Bernd Harries <bha@gmx.de>
To: Hugh Dickins <hugh@veritas.com>
Cc: linux-kernel@vger.kernel.org, mingo@elte.hu
Subject: Re: __get_free_pages(): is the MEM really mine?
Date: Fri, 05 Oct 2001 15:32:18 +0200	[thread overview]
Message-ID: <3BBDB662.CB729213@gmx.de> (raw)
In-Reply-To: <Pine.LNX.4.21.0110051327180.997-100000@localhost.localdomain>

Hugh Dickins wrote:


> I don't
> know whether you're following the mmap-makes-all-pages-present
> model (using remap_page_range), or the fault-page-by-page model
> (supplying your own nopage function). 

The nopage method. In Alessandro Rubini's book (p. 391) I read that I can't use remap_page_range() on pages obtained by get_free_page().
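For reference, a minimal sketch of what I mean by the nopage model on a 2.4-era kernel (my_buffer and my_nopage are placeholder names for illustration, not the actual driver code under discussion):

```c
#include <linux/mm.h>

/* my_buffer: kernel-virtual address of a buffer from __get_free_pages() */
static unsigned long my_buffer;

/* Fault handler: hand out one order-0 page per fault.
 * (In 2.4 the third nopage argument is unused.) */
static struct page *my_nopage(struct vm_area_struct *vma,
			      unsigned long address, int unused)
{
	unsigned long offset = address - vma->vm_start;
	struct page *page = virt_to_page(my_buffer + offset);

	get_page(page);		/* count the new user mapping */
	return page;
}
```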

> But either way it sounds like
> you bump each page count by 1 when you map it in, and then when
> it's unmapped the count goes down to 0 on all the later
> order-0-pages,

Exactly that happens in the version I use on minor 26 today.

> so they get freed before you're done with them.

Hmm, the only thing that happens to them after munmap() is
free_pages(). I don't access the pages any more. But maybe some code in free_pages() does? Or decrements the count to -1?

> Either you should force page count 1 on each of the 
> order-0-pages before you mmap them in 

Yes, I do that in the version used on minor 27 today, right after the allocation.

> (and raise count to 2);

By get_page(), right?
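So after the allocation I would do something like this sketch (2.4-era; just my understanding of your suggestion, variable names are mine): force each constituent order-0 page to count 1, so that a later free_pages() actually frees them instead of the counts going negative:

```c
#include <linux/mm.h>

/* After a high-order allocation, only the head page has count 1;
 * give every tail page count 1 as well before mmap'ing them in. */
unsigned long addr = __get_free_pages(GFP_KERNEL, order);
int i;

for (i = 1; i < (1 << order); i++)
	set_page_count(virt_to_page(addr + i * PAGE_SIZE), 1);
```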

> or you should set
> the Reserved bit on each of them, and clear it before freeing 
> (see use of mem_map_reserve and mem_map_unreserve in various 
> drivers/sound
> sources using remap_page_range; there's also a couple of 
> examples of the nopage method down there too).

OK, thanks a lot. So the way my minor 26 version handles the pages is definitely insufficient, right? If so, that's a statement I can live with.
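For my own notes, the Reserved-bit alternative would then look roughly like this (again a 2.4-era sketch of the pattern I see in the drivers/sound sources, not tested code):

```c
#include <linux/mm.h>
#include <linux/wrapper.h>	/* mem_map_reserve/mem_map_unreserve */

unsigned long addr = __get_free_pages(GFP_KERNEL, order);
unsigned long p;

/* Reserve each order-0 page so unmapping won't free it under us */
for (p = addr; p < addr + (PAGE_SIZE << order); p += PAGE_SIZE)
	mem_map_reserve(virt_to_page(p));

/* ... map to userspace via remap_page_range() or nopage ... */

/* Before freeing, clear the Reserved bits again */
for (p = addr; p < addr + (PAGE_SIZE << order); p += PAGE_SIZE)
	mem_map_unreserve(virt_to_page(p));
free_pages(addr, order);
```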

And it was never meant that I could simply mmap the upper pages to userspace directly, without 'touching' each page, was it?

Ciao,
-- 
Bernd Harries

bha@gmx.de            http://bharries.freeyellow.com
bharries@web.de       Tel. +49 421 809 7343 priv.  | MSB First!
harries@stn-atlas.de       +49 421 457 3966 offi.  | Linux-m68k
bernd@linux-m68k.org       +49 172 139 6054 handy  | Medusa T40


Thread overview: 17+ messages
2001-10-01 11:33 __get_free_pages(): is the MEM really mine? Bernd Harries
2001-10-05 12:55 ` Hugh Dickins
2001-10-05 13:32   ` Bernd Harries [this message]
2001-10-05 15:27     ` Hugh Dickins
  -- strict thread matches above, loose matches on Subject: below --
2001-09-27 14:19 Bernd Harries
2001-09-27 10:06 Bernd Harries
2001-09-27 13:00 ` Ingo Molnar
2001-09-29 17:15   ` Bernd Harries
2001-09-30  7:27     ` Ingo Molnar
2001-09-30 12:59       ` Bernd Harries
2001-10-01  5:55         ` Ingo Molnar
2001-10-05  8:49           ` Bernd Harries
2001-09-27  8:56 Bernd Harries
2001-09-27  9:15 ` Ingo Molnar
2001-09-27  9:20 ` Ingo Molnar
2001-09-27 14:38 ` Eric W. Biederman
2001-09-29  7:32   ` Bernd Harries
