From: Avi Kivity <avi@redhat.com>
To: Tomoki Sekiyama <tomoki.sekiyama.qu@hitachi.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	x86@kernel.org, yrl.pp-manager.tt@hitachi.com,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [RFC PATCH 06/18] KVM: Add facility to run guests on slave CPUs
Date: Thu, 28 Jun 2012 20:02:26 +0300
Message-ID: <4FEC8E22.4000208@redhat.com>
In-Reply-To: <20120628060751.19298.39801.stgit@localhost.localdomain>

On 06/28/2012 09:07 AM, Tomoki Sekiyama wrote:
> Add a path to migrate execution of vcpu_enter_guest() to a slave CPU
> when vcpu->arch.slave_cpu is set.
> 
> After moving to the slave CPU, execution returns to an online CPU
> whenever the guest exits for a reason that the slave CPU cannot handle
> on its own (e.g. handling async page faults).

What about, say, instruction emulation?  It may need to touch guest
memory, which cannot be done from an interrupt-disabled context.
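
For example (a rough sketch with an invented helper name, not the real
call chain): any emulated access to guest memory eventually goes through
the user-copy helpers, which may fault in the backing page and sleep:

  /* Hedged sketch: emulate_read_guest() is made up; the point is that
   * kvm_read_guest() copies through the userspace mapping, so it can
   * fault and sleep -- forbidden with IRQs off. */
  static int emulate_read_guest(struct kvm_vcpu *vcpu, gpa_t gpa,
                                void *val, unsigned long len)
  {
          might_sleep();          /* copy_from_user() may fault */
          return kvm_read_guest(vcpu->kvm, gpa, val, len);
  }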

> +
> +static int vcpu_post_run(struct kvm_vcpu *vcpu, struct task_struct *task,
> +			 int *can_complete_async_pf)
> +{
> +	int r = LOOP_ONLINE;
> +
> +	clear_bit(KVM_REQ_PENDING_TIMER, &vcpu->requests);
> +	if (kvm_cpu_has_pending_timer(vcpu))
> +		kvm_inject_pending_timer_irqs(vcpu);
> +
> +	if (dm_request_for_irq_injection(vcpu)) {
> +		r = -EINTR;
> +		vcpu->run->exit_reason = KVM_EXIT_INTR;
> +		++vcpu->stat.request_irq_exits;
> +	}
> +
> +	if (can_complete_async_pf) {
> +		*can_complete_async_pf = kvm_can_complete_async_pf(vcpu);
> +		if (r == LOOP_ONLINE)
> +			r = *can_complete_async_pf ? LOOP_APF : LOOP_SLAVE;
> +	} else
> +		kvm_check_async_pf_completion(vcpu);
> +
> +	if (signal_pending(task)) {
> +		r = -EINTR;
> +		vcpu->run->exit_reason = KVM_EXIT_INTR;
> +		++vcpu->stat.signal_exits;
> +	}

Isn't this racy?  A signal can arrive right after this check.
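
For comparison, vcpu_enter_guest() does the equivalent check with
interrupts already disabled, just before entry, so a signal that arrives
after the check sends an IPI that itself forces a VM exit.  Roughly (a
sketch from memory, not a drop-in fix):

  local_irq_disable();
  vcpu->mode = IN_GUEST_MODE;
  smp_wmb();

  /* IRQs are off: a signal sent from here on IPIs this CPU, and
   * the external interrupt forces an immediate exit from the guest. */
  if (signal_pending(current) || vcpu->requests) {
          vcpu->mode = OUTSIDE_GUEST_MODE;
          smp_wmb();
          local_irq_enable();
          r = 1;          /* go around the loop and pick it up */
          goto out;
  }

The slave-CPU path needs the same closed window; otherwise the signal
is not noticed until the next unrelated exit.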

-- 
error compiling committee.c: too many arguments to function

Thread overview: 33+ messages
2012-06-28  6:07 [RFC PATCH 00/18] KVM: x86: CPU isolation and direct interrupts handling by guests Tomoki Sekiyama
2012-06-28  6:07 ` [RFC PATCH 01/18] x86: Split memory hotplug function from cpu_up() as cpu_memory_up() Tomoki Sekiyama
2012-06-28  6:07 ` [RFC PATCH 02/18] x86: Add a facility to use offlined CPUs as slave CPUs Tomoki Sekiyama
2012-06-28  6:07 ` [RFC PATCH 03/18] x86: Support hrtimer on " Tomoki Sekiyama
2012-06-28  6:07 ` [RFC PATCH 04/18] KVM: Replace local_irq_disable/enable with local_irq_save/restore Tomoki Sekiyama
2012-06-28  6:07 ` [RFC PATCH 05/18] KVM: Enable/Disable virtualization on slave CPUs are activated/dying Tomoki Sekiyama
2012-06-28  6:07 ` [RFC PATCH 06/18] KVM: Add facility to run guests on slave CPUs Tomoki Sekiyama
2012-06-28 17:02   ` Avi Kivity [this message]
2012-06-29  9:26     ` Tomoki Sekiyama
2012-06-28  6:07 ` [RFC PATCH 07/18] KVM: handle page faults occured in slave CPUs on online CPUs Tomoki Sekiyama
2012-06-28  6:08 ` [RFC PATCH 08/18] KVM: Add KVM_GET_SLAVE_CPU and KVM_SET_SLAVE_CPU to vCPU ioctl Tomoki Sekiyama
2012-06-28  6:08 ` [RFC PATCH 09/18] KVM: Go back to online CPU on VM exit by external interrupt Tomoki Sekiyama
2012-06-28  6:08 ` [RFC PATCH 10/18] KVM: proxy slab operations for slave CPUs on online CPUs Tomoki Sekiyama
2012-06-28  6:08 ` [RFC PATCH 11/18] KVM: no exiting from guest when slave CPU halted Tomoki Sekiyama
2012-06-28  6:08 ` [RFC PATCH 12/18] x86/apic: Enable external interrupt routing to slave CPUs Tomoki Sekiyama
2012-06-28  6:08 ` [RFC PATCH 13/18] x86/apic: IRQ vector remapping on slave for " Tomoki Sekiyama
2012-06-28  6:08 ` [RFC PATCH 14/18] KVM: Directly handle interrupts by guests without VM EXIT on " Tomoki Sekiyama
2012-06-28  6:08 ` [RFC PATCH 15/18] KVM: vmx: Add definitions PIN_BASED_PREEMPTION_TIMER Tomoki Sekiyama
2012-06-28  6:08 ` [RFC PATCH 16/18] KVM: add kvm_arch_vcpu_prevent_run to prevent VM ENTER when NMI is received Tomoki Sekiyama
2012-06-28 16:48   ` Avi Kivity
2012-06-29  9:26     ` Tomoki Sekiyama
2012-06-28  6:08 ` [RFC PATCH 17/18] KVM: route assigned devices' MSI/MSI-X directly to guests on slave CPUs Tomoki Sekiyama
2012-06-28  6:08 ` [RFC PATCH 18/18] x86: request TLB flush to slave CPU using NMI Tomoki Sekiyama
2012-06-28 16:38   ` Avi Kivity
2012-06-29  9:26     ` Tomoki Sekiyama
2012-06-28 16:58 ` [RFC PATCH 00/18] KVM: x86: CPU isolation and direct interrupts handling by guests Avi Kivity
2012-06-28 17:26   ` Jan Kiszka
2012-06-28 17:34     ` Avi Kivity
2012-06-29  9:25       ` Tomoki Sekiyama
2012-06-29 14:56         ` Avi Kivity
2012-07-06 10:33           ` Tomoki Sekiyama
2012-07-12  9:04             ` Avi Kivity
2012-07-04  9:33 ` Tomoki Sekiyama
