linuxppc-dev.lists.ozlabs.org archive mirror
From: Michael Neuling <mikey@neuling.org>
To: michael@ellerman.id.au
Cc: kexec@lists.infradead.org, Anton Blanchard <anton@samba.org>,
	linuxppc-dev@ozlabs.org
Subject: Re: [PATCH 2/2] powerpc,kexec: Speedup kexec hpte tear down
Date: Wed, 12 May 2010 10:43:08 +1000	[thread overview]
Message-ID: <7247.1273624988@neuling.org> (raw)
In-Reply-To: <1273624565.5738.8.camel@concordia>



In message <1273624565.5738.8.camel@concordia> you wrote:
> 
> 
> On Wed, 2010-05-12 at 09:29 +1000, Michael Neuling wrote:
> >
> > In message <1273561463.9209.138.camel@concordia> you wrote:
> > >
> > > On Tue, 2010-05-11 at 16:28 +1000, Michael Neuling wrote:
> > > > Currently for kexec the PTE tear down on 1TB segment systems normally
> > > > requires 3 hcalls for each PTE removal. On a machine with 32GB of
> > > > memory it can take around a minute to remove all the PTEs.
> > > >
> > > ..
> > > > -	/* TODO: Use bulk call */
> > >
> > > ...
> > > > +	/* Read in batches of 4,
> > > > +	 * invalidate only valid entries not in the VRMA
> > > > +	 * hpte_count will be a multiple of 4
> > > > +	 */
> > > > +	for (i = 0; i < hpte_count; i += 4) {
> > > > +		lpar_rc = plpar_pte_read_4_raw(0, i, (void *)ptes);
> > > > +		if (lpar_rc != H_SUCCESS)
> > > > +			continue;
> > > > +		for (j = 0; j < 4; j++) {
> > > > +			if ((ptes[j].pteh & HPTE_V_VRMA_MASK) ==
> > > > +				HPTE_V_VRMA_MASK)
> > > > +				continue;
> > > > +			if (ptes[j].pteh & HPTE_V_VALID)
> > > > +				plpar_pte_remove_raw(0, i + j, 0,
> > > > +					&(ptes[j].pteh), &(ptes[j].ptel));
> > > >  		}
> > >
> > > Have you tried using the bulk remove call, if none of the HPTEs are for
> > > the VRMA? Rumour was it was slower/the-same, but that may have been
> > > apocryphal.
> >
> > No, I didn't try it.
> >
> > I think the real solution is to ask FW for a new call to do it all for
> > us.
> 
> Sure, you could theoretically still get a 4x speedup though by using the
> bulk remove.

We probably only do the remove on < 1% of the hptes now.  So I doubt we
would get a speedup since most of the time we aren't doing the remove
anymore.

Mikey


Thread overview: 9+ messages
2010-05-11  6:28 [PATCH 1/2] powerpc: Add hcall to read 4 ptes at a time in real mode Michael Neuling
2010-05-11  6:28 ` [PATCH 2/2] powerpc,kexec: Speedup kexec hpte tear down Michael Neuling
2010-05-11  7:04   ` Michael Ellerman
2010-05-11 23:29     ` Michael Neuling
2010-05-12  0:36       ` Michael Ellerman
2010-05-12  0:43         ` Michael Neuling [this message]
2010-05-12  1:00           ` Paul Mackerras
2010-05-12  1:06             ` Michael Neuling
2010-05-12  1:36               ` Michael Ellerman
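
An aside on the bulk remove idea discussed in this thread: per PAPR,
H_BULK_REMOVE accepts up to four translation-specifier pairs per hcall,
so batching the removals the same way the reads are batched is plausible
on paper. Below is a minimal, untested sketch of that approach. The
HBR_* flag values follow the PAPR control-word encoding (the same
constants appear in arch/powerpc/platforms/pseries/lpar.c), while
struct hpte_pair and bulk_remove_batch() are hypothetical names invented
for illustration, standing in for the ptes[] array and inner loop of the
patch quoted above.

#include <linux/bug.h>
#include <asm/hvcall.h>		/* plpar_hcall9(), H_BULK_REMOVE */
#include <asm/mmu.h>		/* HPTE_V_VALID, HPTE_V_VRMA_MASK */

/* H_BULK_REMOVE translation-specifier control flags (per PAPR) */
#define HBR_REQUEST	0x4000000000000000UL
#define HBR_END		0xc000000000000000UL

/* Hypothetical stand-in for the patch's ptes[] element type */
struct hpte_pair {
	unsigned long pteh;
	unsigned long ptel;
};

/* Sketch: remove the valid, non-VRMA entries of one 4-entry batch
 * with a single H_BULK_REMOVE hcall instead of up to four
 * plpar_pte_remove_raw() calls.
 */
static void bulk_remove_batch(unsigned long slot, struct hpte_pair *ptes)
{
	unsigned long param[PLPAR_HCALL9_BUFSIZE] = { 0 };
	long rc;
	int j, pix = 0;

	for (j = 0; j < 4; j++) {
		/* Same filtering as the patch: skip VRMA and invalid */
		if ((ptes[j].pteh & HPTE_V_VRMA_MASK) == HPTE_V_VRMA_MASK)
			continue;
		if (!(ptes[j].pteh & HPTE_V_VALID))
			continue;
		/* Each specifier is a pair: a control word (flags | PTE
		 * index), then the AVPN word, unused here since we
		 * remove by index alone. */
		param[pix++] = HBR_REQUEST | (slot + j);
		param[pix++] = 0;
	}
	if (!pix)
		return;			/* nothing to remove in this batch */
	if (pix < 8)
		param[pix] = HBR_END;	/* terminate a short list */
	rc = plpar_hcall9(H_BULK_REMOVE, param, param[0], param[1],
			  param[2], param[3], param[4], param[5],
			  param[6], param[7]);
	BUG_ON(rc != H_SUCCESS);
}

As Michael Neuling notes, though, this only helps if removals are
frequent: once the batched read filters the table down to the < 1% of
entries that are valid and outside the VRMA, the read hcalls dominate
the runtime and batching the rare removes buys little.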
