From: Rusty Russell <rusty@rustcorp.com.au>
To: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Virtualization Mailing List <virtualization@lists.osdl.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Andi Kleen <ak@suse.de>, Zachary Amsden <zach@vmware.com>,
	Avi Kivity <avi@qumranet.com>,
	Anthony Liguori <anthony@codemonkey.ws>,
	Glauber de Oliveira Costa <glommer@gmail.com>,
	"Nakajima, Jun" <jun.nakajima@intel.com>
Subject: Re: [PATCH RFC] paravirt: cleanup lazy mode handling
Date: Tue, 02 Oct 2007 11:34:17 +1000	[thread overview]
Message-ID: <1191288858.6979.20.camel@localhost.localdomain> (raw)
In-Reply-To: <470186C4.5080208@goop.org>

On Mon, 2007-10-01 at 16:46 -0700, Jeremy Fitzhardinge wrote:
> This patch removes the set_lazy_mode operation, and adds "enter" and
> "leave" lazy mode operations on mmu_ops and cpu_ops.  All the logic
> associated with entering and leaving lazy states is now in common code
> (basically BUG_ONs to make sure that no mode is current when entering
> a lazy mode, and make sure that the mode is current when leaving).
> Also, flush is handled in a common way, by simply leaving and
> re-entering the lazy mode.

That's good, but this code does lose on native because we no longer
simply replace the entire thing with noops.

Perhaps inverting this and having (inline) helpers is the way to go?
I'm thinking something like:

static inline void paravirt_enter_lazy(enum paravirt_lazy_mode mode)
{
	BUG_ON(x86_read_percpu(paravirt_lazy_mode) != PARAVIRT_LAZY_NONE);
	BUG_ON(preemptible());

	x86_write_percpu(paravirt_lazy_mode, mode);
}

static inline void paravirt_exit_lazy(enum paravirt_lazy_mode mode)
{
	BUG_ON(x86_read_percpu(paravirt_lazy_mode) != mode);
	BUG_ON(preemptible());

	x86_write_percpu(paravirt_lazy_mode, PARAVIRT_LAZY_NONE);
}

The only trick is that flushes are so rarely required that it's
probably worth putting the unlikely() in the top level:

static void arch_flush_lazy_cpu_mode(void)
{
	if (unlikely(x86_read_percpu(paravirt_lazy_mode) != PARAVIRT_LAZY_NONE)) {
		/* Leave and re-enter, so pending operations get issued. */
		PVOP_VCALL0(cpu_ops.exit_lazy);
		PVOP_VCALL0(cpu_ops.enter_lazy);
	}
}

static void arch_flush_lazy_mmu_mode(void)
{
	if (unlikely(x86_read_percpu(paravirt_lazy_mode) != PARAVIRT_LAZY_NONE)) {
		/* Leave and re-enter, so pending operations get issued. */
		PVOP_VCALL0(mmu_ops.exit_lazy);
		PVOP_VCALL0(mmu_ops.enter_lazy);
	}
}

Thoughts?
Rusty.

Thread overview: 7+ messages
2007-10-01 23:46 [PATCH RFC] paravirt: cleanup lazy mode handling Jeremy Fitzhardinge
2007-10-02  1:34 ` Rusty Russell [this message]
2007-10-02  6:29   ` Jeremy Fitzhardinge
2007-10-02  7:53     ` Rusty Russell
2007-10-02 22:43       ` Jeremy Fitzhardinge
2007-10-02  5:48 ` Avi Kivity
2007-10-02  6:24   ` Jeremy Fitzhardinge
