linux-mm.kvack.org archive mirror
From: Christoph Lameter <clameter@sgi.com>
To: Paul Jackson <pj@sgi.com>
Cc: akpm@osdl.org, menage@google.com, a.p.zijlstra@chello.nl,
	nickpiggin@yahoo.com.au, linux-mm@kvack.org, dgc@sgi.com,
	ak@suse.de
Subject: Re: [PATCH 1/5] Add a map to track dirty pages per node
Date: Mon, 22 Jan 2007 09:41:29 -0800 (PST)	[thread overview]
Message-ID: <Pine.LNX.4.64.0701220939060.24578@schroedinger.engr.sgi.com> (raw)
In-Reply-To: <20070119211532.d47793b1.pj@sgi.com>

On Fri, 19 Jan 2007, Paul Jackson wrote:

> Christoph wrote:
> > + * Called without the tree_lock! So we may on rare occasions (when we race with
> > + * cpuset_clear_dirty_nodes()) follow the dirty_node pointer to random data.
> 
> Random is ok, on rare occasion, as you note.
> 
> But is there any chance you could follow it to a non-existent memory location
> and oops?  These long nodemasks are kmalloc/kfree'd, and I thought that once
> kfree'd, there was no guarantee that the stale address would even point to
> a mapped page of RAM.  This situation reminds me of the one that led to adding
> some RCU dependent code to kernel/cpuset.c.

This could become an issue if we implement memory unplug; in that case
RCU locking could help. But right now that situation is only possible
with memory mapped via page tables (vmalloc or user space pages). The
slab allocator can currently only allocate from 1-1 mapped memory, so
there is no danger there.
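
For reference, if a stale pointer into freed slab memory could really
become unmapped one day (say, with memory unplug), the fix would be
roughly the RCU pattern Paul mentioned from kernel/cpuset.c. A minimal
sketch only -- dirty_info, dirty_on_node and clear_dirty_nodes are
made-up names for illustration, not the fields from the patch:

#include <linux/nodemask.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct dirty_info {
	nodemask_t *dirty_nodes;	/* kmalloc'd; may be cleared concurrently */
};

/* Reader: runs without tree_lock, may race with the clear path below. */
static int dirty_on_node(struct dirty_info *di, int node)
{
	nodemask_t *nodes;
	int ret = 0;

	rcu_read_lock();
	nodes = rcu_dereference(di->dirty_nodes);
	if (nodes)
		ret = node_isset(node, *nodes);
	rcu_read_unlock();
	return ret;
}

/*
 * Writer: detach the map, wait for readers to drain, then free it.
 * Assumes callers of the clear path are serialized among themselves.
 */
static void clear_dirty_nodes(struct dirty_info *di)
{
	nodemask_t *nodes = di->dirty_nodes;

	rcu_assign_pointer(di->dirty_nodes, NULL);
	synchronize_rcu();
	kfree(nodes);
}

Until something like that is needed, the unlocked read can at worst see
stale bits in memory that is still 1-1 mapped, so it cannot oops.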



--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org


Thread overview: 15+ messages
2007-01-20  3:10 [PATCH 0/5] Cpuset aware writeback V1 Christoph Lameter
2007-01-20  3:10 ` [PATCH 1/5] Add a map to track dirty pages per node Christoph Lameter
2007-01-20  5:15   ` Paul Jackson
2007-01-22 17:41     ` Christoph Lameter [this message]
2007-01-22  1:31   ` David Chinner
2007-01-22 19:30     ` Christoph Lameter
2007-01-28 21:38       ` David Chinner
2007-01-29 16:50         ` Christoph Lameter
2007-01-20  3:10 ` [PATCH 2/5] Add a nodemask to pdflush functions Christoph Lameter
2007-01-20  3:10 ` [PATCH 3/5] Per cpuset dirty ratio calculation Christoph Lameter
2007-01-20  3:10 ` [PATCH 4/5] Cpuset aware writeback during reclaim Christoph Lameter
2007-01-20  3:10 ` [PATCH 5/5] Throttle vm writeout per cpuset Christoph Lameter
  -- strict thread matches above, loose matches on Subject: below --
2007-01-23 18:52 [PATCH 0/5] Cpuset aware writeback V2 Christoph Lameter
2007-01-23 18:52 ` [PATCH 1/5] Add a map to track dirty pages per node Christoph Lameter
2007-01-25  3:04   ` Ethan Solomita
2007-01-25  5:52     ` Christoph Lameter

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=Pine.LNX.4.64.0701220939060.24578@schroedinger.engr.sgi.com \
    --to=clameter@sgi.com \
    --cc=a.p.zijlstra@chello.nl \
    --cc=ak@suse.de \
    --cc=akpm@osdl.org \
    --cc=dgc@sgi.com \
    --cc=linux-mm@kvack.org \
    --cc=menage@google.com \
    --cc=nickpiggin@yahoo.com.au \
    --cc=pj@sgi.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
