From: "Roger Pau Monné" <roger.pau@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Cc: Luwei Cheng <chengluwei@gmail.com>,
	xen-devel@lists.xenproject.org, wei.liu2@citrix.com,
	david.vrabel@citrix.com
Subject: Re: [PROPOSAL] Event channel for SMP-VMs: per-vCPU or per-OS?
Date: Tue, 29 Oct 2013 12:00:07 +0100
Message-ID: <526F9537.10007@citrix.com>
In-Reply-To: <526F9389.1000504@eu.citrix.com>

On 29/10/13 11:52, George Dunlap wrote:
> On 10/29/2013 09:57 AM, Jan Beulich wrote:
>>>>> On 29.10.13 at 10:49, Luwei Cheng <chengluwei@gmail.com> wrote:
>>> On Tue, Oct 29, 2013 at 5:34 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>> On 29.10.13 at 10:02, Luwei Cheng <chengluwei@gmail.com> wrote:
>>>>> On Tue, Oct 29, 2013 at 4:19 PM, Jan Beulich <JBeulich@suse.com>
>>>>> wrote:
>>>>>>>>> On 29.10.13 at 03:56, Luwei Cheng <chengluwei@gmail.com> wrote:
>>>>>>> Hmm.. though all vCPUs can serve the events, the hypervisor
>>>>>>> delivers the event to only "one" vCPU at a time, so only that
>>>>>>> vCPU can see this event. In principle, no race condition will
>>>>>>> be introduced.
>>>>>>
>>>>>> No - an event is globally pending (at least in the old model; the
>>>>>> situation is better with the new FIFO model), i.e. if more than
>>>>>> one of the guest's vCPUs allowed to service it were looking at it
>>>>>> simultaneously, they'd still need to arbitrate which one ought to
>>>>>> handle it.
>>>>>>
>>>>>> So your proposed extension might need to be limited to the
>>>>>> FIFO model.
>>>>>
>>>>> Thanks for your reply. Yes, you are right. My prior description
>>>>> was incorrect. When more than one vCPU picks up the event, even
>>>>> without arbitration, will it cause a "correctness" problem? The
>>>>> event is served by whichever vCPU enters the handler first, and
>>>>> the remaining vCPUs simply have nothing to do in the event
>>>>> handler (not much harm).
>>>>
>>>> That really depends on the handler. Plus it might be a performance
>>>> and/or latency issue to run handlers that don't need to be run.
>>>
>>> I think the situation is much like IO-APIC routing in physical SMP
>>> systems:
>>
>> Indeed, yet you draw the wrong conclusion.
>>
>>> in logical destination mode, all processors can serve I/O interrupts.
>>
>> But only one of them gets any individual instance delivered - the
>> arbitration is done in hardware.
> 
> Xen should be able to arbitrate which one gets the actual event
> delivery, right?  So the only risk would be that another vcpu would
> notice the pending interrupt and handle it itself.

If events are no longer assigned to a single CPU there's no guarantee
that the CPU you deliver the event to is the one that's actually going
to handle it: another CPU might already be in the event channel upcall
and steal it from under your feet (or even worse, the event could be
fired on several CPUs at the same time, at least with the current
implementation).
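
To illustrate the race with the 2-level ABI (a minimal sketch in C,
not the actual Xen/Linux code; shared_pending, claim_event and
run_handler are illustrative names): the pending bits live in a page
shared by all vCPUs, so the only safe way for a vCPU to claim an
event is an atomic test-and-clear, and a vCPU that loses that race
must not run the handler.

    #include <stdatomic.h>
    #include <stdbool.h>

    #define BITS_PER_WORD (sizeof(unsigned long) * 8)

    /* One pending bit per event channel, shared by every vCPU. */
    extern _Atomic unsigned long shared_pending[];

    extern void run_handler(unsigned int port); /* hypothetical dispatch */

    /* Atomically claim the pending bit; returns true for exactly one
     * of the racing vCPUs. */
    static bool claim_event(unsigned int port)
    {
        unsigned long mask = 1UL << (port % BITS_PER_WORD);
        unsigned long old =
            atomic_fetch_and(&shared_pending[port / BITS_PER_WORD], ~mask);
        return old & mask;          /* we cleared it, so we service it */
    }

    void upcall(unsigned int port)
    {
        /* Without this test-and-clear arbitration, two vCPUs entering
         * the upcall at the same time would both run the handler for
         * the same event instance - the "stolen from under your feet"
         * case above. */
        if (claim_event(port))
            run_handler(port);
    }

The FIFO model sidesteps part of this because an event is linked onto
exactly one per-vCPU queue at a time, which is why the proposal would
likely need to be limited to that model, as Jan notes above.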

Thread overview: 26+ messages
2013-10-28 15:26 [PROPOSAL] Event channel for SMP-VMs: per-vCPU or per-OS? Luwei Cheng
2013-10-28 15:51 ` Roger Pau Monné
2013-10-29  2:56   ` Luwei Cheng
2013-10-29  8:19     ` Jan Beulich
2013-10-29  9:02       ` Luwei Cheng
2013-10-29  9:34         ` Jan Beulich
2013-10-29  9:49           ` Luwei Cheng
2013-10-29  9:57             ` Jan Beulich
2013-10-29 10:52               ` George Dunlap
2013-10-29 11:00                 ` Roger Pau Monné [this message]
2013-10-29 14:20                   ` Luwei Cheng
2013-10-29 14:30                     ` Wei Liu
2013-10-29 14:43                       ` Luwei Cheng
2013-10-29 15:25                         ` Wei Liu
2013-10-30  7:40                           ` Luwei Cheng
2013-10-30 10:27                             ` Wei Liu
2013-10-29 11:22                 ` Jan Beulich
2013-10-29 14:28                   ` Luwei Cheng
2013-10-29 14:42                     ` Jan Beulich
2013-10-29 15:20                       ` Luwei Cheng
2013-10-29 16:37                         ` Jan Beulich
2013-10-29 15:21 ` David Vrabel
2013-10-30  7:35   ` Luwei Cheng
2013-10-30  8:45     ` Roger Pau Monné
2013-10-30 13:11       ` Luwei Cheng
