linuxppc-dev.lists.ozlabs.org archive mirror
From: Michael Neuling <mikey@neuling.org>
To: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: linuxppc-dev@ozlabs.org, Paul Mackerras <paulus@samba.org>
Subject: Re: [PATCH] fixes for the SLB shadow buffer
Date: Thu, 02 Aug 2007 09:32:12 +1000	[thread overview]
Message-ID: <1491.1186011132@neuling.org> (raw)
In-Reply-To: <1186007636.5495.536.camel@localhost.localdomain>

> On Wed, 2007-08-01 at 16:02 +1000, Michael Neuling wrote:
> > We sometimes change the vmalloc segment in slb_flush_and_rebolt, but we
> > never update the SLB shadow buffer to match.  This fixes that.  Thanks to
> > paulus for finding this.
> > 
> > I also added some write barriers to ensure the shadow buffer is always
> > valid.
> 
> Is the shadow global or per-cpu?
> 
> Because in the latter case, I think you need more than that.

It's per CPU.
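
FWIW, the barriers are there so an entry is never seen half-written.
Roughly what the update sequence looks like (just a sketch; it assumes
the get_slb_shadow(), mk_esid_data() and mk_vsid_data() helpers we
already have in slb.c):

static inline void slb_shadow_update(unsigned long ea, unsigned long flags,
				     unsigned long entry)
{
	/*
	 * Clear the ESID first so the entry is never valid while it
	 * is being updated, write the new VSID, then set the new ESID
	 * to make the entry valid again.  The smp_wmb()s keep those
	 * three stores in order.
	 */
	get_slb_shadow()->save_area[entry].esid = 0;
	smp_wmb();
	get_slb_shadow()->save_area[entry].vsid = mk_vsid_data(ea, flags);
	smp_wmb();
	get_slb_shadow()->save_area[entry].esid = mk_esid_data(ea, entry);
	smp_wmb();
}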

> > @@ -759,6 +762,9 @@ int hash_page(unsigned long ea, unsigned
> >  		   mmu_psize_defs[mmu_vmalloc_psize].sllp) {
> >  		get_paca()->vmalloc_sllp =
> >  			mmu_psize_defs[mmu_vmalloc_psize].sllp;
> > +		vflags = SLB_VSID_KERNEL |
> > +			mmu_psize_defs[mmu_vmalloc_psize].sllp;
> > +		slb_shadow_update(VMALLOC_START, vflags, 1);
> >  		slb_flush_and_rebolt();
> >  	}
> 
> Later on:
> 
>         } else if (get_paca()->vmalloc_sllp !=
>                    mmu_psize_defs[mmu_vmalloc_psize].sllp) {
>                 get_paca()->vmalloc_sllp =
>                         mmu_psize_defs[mmu_vmalloc_psize].sllp;
>                 slb_flush_and_rebolt();
>         }
> 
> If your shadow is per-cpu, you need to fix that up too.

I'm confused... isn't that the same section of code?

> I'm tempted to think you should just expose an slb_vmalloc_update()
> from slb.c that does the shadow update and calls flush_and_rebolt.
> That would also get rid of your ifdef around the vflags definition
> (which wasn't necessary anyway; you could have just put it inside
> the if statement).

OK, I'll create an slb_vmalloc_update for the next rev.
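
Something along these lines, based on the hunk above (a sketch, not
the final patch):

void slb_vmalloc_update(void)
{
	unsigned long vflags;

	/* Recompute the vmalloc segment flags, update the shadow
	 * buffer, then flush and rebolt so the real SLB matches. */
	vflags = SLB_VSID_KERNEL | mmu_psize_defs[mmu_vmalloc_psize].sllp;
	slb_shadow_update(VMALLOC_START, vflags, 1);
	slb_flush_and_rebolt();
}

hash_page() then just updates vmalloc_sllp and calls this, and the
vflags computation (and the ifdef) stays inside slb.c.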

Mikey


Thread overview: 17+ messages
2007-08-01  4:56 [PATCH] fixes for the SLB shadow buffer Michael Neuling
2007-08-01  5:28 ` Paul Mackerras
2007-08-01  6:02   ` Michael Neuling
2007-08-01 21:48     ` Will Schmidt
2007-08-02  5:56       ` Michael Neuling
2007-08-02  7:31         ` Benjamin Herrenschmidt
2007-08-02  8:56           ` Michael Neuling
2007-08-02  8:58             ` Benjamin Herrenschmidt
2007-08-02  9:03               ` Michael Neuling
2007-08-02  9:14                 ` Benjamin Herrenschmidt
2007-08-02  9:28                   ` Michael Neuling
2007-08-03  1:55                     ` Michael Neuling
2007-08-03  2:50                       ` Benjamin Herrenschmidt
2007-08-01 22:33     ` Benjamin Herrenschmidt
2007-08-01 23:32       ` Michael Neuling [this message]
2007-08-02  0:05         ` Benjamin Herrenschmidt
2007-08-02  1:04           ` Benjamin Herrenschmidt
