From: Keir Fraser
Subject: Re: Hypercall continuation and wait_event
Date: Tue, 10 Apr 2012 08:37:41 +0100
To: Ruslan Nikolaev, "xen-devel@lists.xen.org"
List-Id: xen-devel@lists.xenproject.org

Not sure. Did you snip some lines from the call trace that might explain
why the call trace is being generated (e.g., watchdog timeout, page fault,
...)? From the lines you provide, we can't even tell which vcpu it is that
is dumping the call trace.

 -- Keir

On 09/04/2012 22:19, "Ruslan Nikolaev" wrote:

> Keir,
>
> Thanks again! When I use the scheme I described, I periodically receive
> kernel errors, as shown below. Note that I use an HVM domain and also
> 'isolcpus' as a Linux kernel option to prevent the dedicated VCPU from
> being used normally. The hypercall is made from a special kernel thread
> (which is bound to the dedicated VCPU before the call).
>
> What could be the reason for these messages? It looks like something
> related to a timer.
>
> [ 1039.319957] RIP: 0010:[]  [] default_send_IPI_mask_sequence_phys+0x95/0xce
> [ 1039.319957] RSP: 0018:ffff88007f043c28  EFLAGS: 00000046
> [ 1039.319957] RAX: 0000000000000400 RBX: 0000000000000096 RCX: 0000000000000020
> [ 1039.319957] RDX: 0000000000000002 RSI: 0000000000000020 RDI: 0000000000000300
> [ 1039.319957] RBP: ffff88007f043c68 R08: 0000000000000000 R09: ffffffff8163eb20
> [ 1039.319957] R10: ffff8800ff043bad R11: 0000000000000000 R12: 000000000000d602
> [ 1039.319957] R13: 0000000000000002 R14: 0000000000000400 R15: ffffffff8163eb20
> [ 1039.319957] FS:  0000000000000000(0000) GS:ffff88007f040000(0000) knlGS:0000000000000000
> [ 1039.319957] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> [ 1039.319957] CR2: 00007f74195d29be CR3: 000000007af4d000 CR4: 00000000000006a0
> [ 1039.319957] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [ 1039.319957] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> [ 1039.319957] Process swapper/2 (pid: 0, threadinfo ffff88007c4ec000, task ffff88007c4f1650)
> [ 1039.319957] Stack:
> [ 1039.319957]  0000000000000002 0000000400000008 ffff88007f043c88 0000000000002710
> [ 1039.319957]  ffffffff8161a280 ffffffff8161a340 0000000000000001 ffffffff8161a4c0
> [ 1039.319957]  ffff88007f043c78 ffffffff8101ecc6 ffff88007f043c98 ffffffff8101bb81
> [ 1039.319957] Call Trace:
> [ 1039.319957]
> [ 1039.319957]  [] physflat_send_IPI_all+0x12/0x14
> [ 1039.319957]  [] arch_trigger_all_cpu_backtrace+0x4b/0x6e
> [ 1039.319957]  [] __rcu_pending+0x224/0x347
> [ 1039.319957]  [] rcu_check_callbacks+0xa2/0xb4
> [ 1039.319957]  [] update_process_times+0x3a/0x70
> [ 1039.319957]  [] tick_sched_timer+0x70/0x9a
> [ 1039.319957]  [] __run_hrtimer.isra.26+0x75/0xce
> [ 1039.319957]  [] hrtimer_interrupt+0xd7/0x193
> [ 1039.319957]  [] xen_timer_interrupt+0x2f/0x155
> [ 1039.319957]  [] ? pvclock_clocksource_read+0x48/0xb4
> [ 1039.319957]  [] ? pvclock_clocksource_read+0x48/0xb4
> [ 1039.319957]  [] ? pvclock_clocksource_read+0x48/0xb4
> [ 1039.319957]  [] handle_irq_event_percpu+0x29/0x126
> [ 1039.319957]  [] ? info_for_irq+0x9/0x19
> [ 1039.319957]  [] handle_percpu_irq+0x39/0x4d
> [ 1039.319957]  [] __xen_evtchn_do_upcall+0x147/0x1df
> [ 1039.319957]  [] xen_evtchn_do_upcall+0x27/0x39
> [ 1039.319957]  [] xen_hvm_callback_vector+0x6e/0x80
> [ 1039.319957]
> [ 1039.319957]  [] ? rcu_needs_cpu+0x110/0x1c1
> [ 1039.319957]  [] ? native_safe_halt+0x6/0x8
> [ 1039.319957]  [] default_idle+0x27/0x44
> [ 1039.319957]  [] cpu_idle+0x66/0xa4
> [ 1039.319957]  [] start_secondary+0x1ac/0x1b1
>
> Thanks,
> Ruslan
>
> ----- Original Message -----
> From: Keir Fraser
> To: Ruslan Nikolaev; "xen-devel@lists.xen.org"
> Sent: Monday, April 9, 2012 8:58 PM
> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
>
> It means the vcpu has an interrupt pending (in the pv case, that means an
> event channel has a pending event).
>
> On 09/04/2012 21:16, "Ruslan Nikolaev" wrote:
>
>> Keir,
>>
>> Thanks for your replies! Just one more question about
>> local_event_need_delivery(). Under what (common) conditions would I expect
>> to have local events that need delivery?
>>
>> Ruslan
>>
>> ----- Original Message -----
>> From: Keir Fraser
>> To: Ruslan Nikolaev; "xen-devel@lists.xen.org"
>> Sent: Monday, April 9, 2012 8:09 PM
>> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
>>
>> On 09/04/2012 20:18, "Ruslan Nikolaev" wrote:
>>
>>> Thanks for the reply.
>>>
>>> Since it can take arbitrarily long for an event to arrive (e.g., it is
>>> coming from a different guest on a user request), how do I need to handle
>>> this case? Does it mean that I only need to make sure that nothing gets
>>> scheduled on this VCPU in the guest?
>>
>> Nothing else *can* get scheduled on this VCPU in the guest. The VCPU will
>> sleep within wait_event within the hypercall context. Hence you must not
>> hold any hypervisor spinlocks either, for example.
>>
>>> Also, it is not exactly clear to me how wait_event avoids the need for
>>> hypercall continuation. What about local_events_need_delivery() or
>>> softirq_pending()? Are they going to be handled by wait_event internally?
>>
>> Your VCPU gets descheduled. Hence softirq_pending() is not your concern for
>> the duration that you're descheduled. And if local_event_need_delivery(),
>> that's too bad; they have to wait for the vcpu to wake up on the event.
>>
>> -- Keir
>>
>>> Ruslan
>>>
>>> ----- Original Message -----
>>> From: Keir Fraser
>>> To: Ruslan Nikolaev; "xen-devel@lists.xen.org"
>>> Sent: Monday, April 9, 2012 6:54 PM
>>> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
>>>
>>> On 09/04/2012 18:51, "Ruslan Nikolaev" wrote:
>>>
>>>> Hi,
>>>>
>>>> I am curious how I can properly support hypercall continuation and
>>>> wait_event. I have a dedicated VCPU in a domain which makes a special
>>>> hypercall, and the hypercall waits for a certain event to arrive. I am
>>>> using the wait queues available in Xen, so wait_event will be invoked in
>>>> the hypercall once it is ready to accept events. However, my understanding
>>>> is that even though I have a dedicated VCPU for this hypercall, I still
>>>> may need to support hypercall continuation properly. (Is this the case?)
>>>> So, my question is how exactly the need for hypercall
>>>
>>> No, it's not the case; the old hypercall_create_continuation() mechanism
>>> does not need to be used with wait_event().
>>>
>>> -- Keir
>>>
>>>> preemption may affect wait_event() and wait() operations, and where would
>>>> I need to do hypercall_preempt_check()?
>>>>
>>>> Thank you!
>>>> Ruslan
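
For reference, a minimal hypervisor-side sketch of the pattern discussed
above, assuming the waitqueue interface declared in xen/include/xen/wait.h
(init_waitqueue_head / wait_event / wake_up_all). The names my_wq,
my_data_ready, do_mywait_op() and mywait_post_event() are invented for
illustration; this is not existing Xen code, just the rough shape:

/* Sketch only -- invented names, not existing Xen code. */
#include <xen/init.h>
#include <xen/wait.h>

static struct waitqueue_head my_wq;
static int my_data_ready;

static int __init mywait_init(void)
{
    init_waitqueue_head(&my_wq);
    return 0;
}
__initcall(mywait_init);

/* Hypothetical hypercall handler, entered on the guest's dedicated VCPU. */
long do_mywait_op(void)
{
    /*
     * No hypervisor spinlocks may be held here: wait_event() deschedules
     * this VCPU until the condition becomes true, which may take
     * arbitrarily long.  No hypercall continuation is needed; the VCPU
     * simply sleeps inside the hypercall.
     */
    wait_event(my_wq, my_data_ready);

    my_data_ready = 0;
    return 0;
}

/* Producer side, e.g. reached from another domain's hypercall. */
void mywait_post_event(void)
{
    my_data_ready = 1;          /* real code needs proper synchronisation */
    wake_up_all(&my_wq);        /* reschedules any VCPU sleeping above    */
}

The comments restate Keir's points: the waiting VCPU is simply descheduled,
so there is no hypercall_create_continuation()/hypercall_preempt_check()
dance, but the handler must not be holding any hypervisor locks when it
blocks.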
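
And on the guest side, a rough sketch of the arrangement Ruslan describes:
a kernel thread bound to the dedicated VCPU before it first runs, issuing
the blocking hypercall from that thread. Again the names are invented,
CPU 2 is only an assumed value for the CPU listed in isolcpus=, and
my_issue_hypercall() is a placeholder for however the custom hypercall is
actually invoked (it is not a real Linux or Xen interface):

/* Sketch only -- placeholder names, not a real driver. */
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/module.h>
#include <linux/sched.h>

#define MY_DEDICATED_CPU 2  /* assumed: the CPU passed to isolcpus= */

static struct task_struct *my_thread;

/*
 * Placeholder for the real hypercall invocation (e.g. through the Xen
 * hypercall page).  With the real call, this VCPU would sleep inside the
 * hypervisor's wait_event() until an event is posted for it; here it just
 * returns immediately so the sketch stays self-contained.
 */
static long my_issue_hypercall(void)
{
    return 0;
}

static int my_wait_thread(void *unused)
{
    while (!kthread_should_stop()) {
        long rc = my_issue_hypercall();

        if (rc < 0)
            break;
        /* ... handle the delivered event here ... */
    }
    return 0;
}

static int __init my_wait_init(void)
{
    my_thread = kthread_create(my_wait_thread, NULL, "my_xen_wait");
    if (IS_ERR(my_thread))
        return PTR_ERR(my_thread);

    /* Pin the thread to the dedicated VCPU before it ever runs. */
    kthread_bind(my_thread, MY_DEDICATED_CPU);
    wake_up_process(my_thread);
    return 0;
}

static void __exit my_wait_exit(void)
{
    /* Only completes once the thread is out of the blocking hypercall. */
    kthread_stop(my_thread);
}

module_init(my_wait_init);
module_exit(my_wait_exit);
MODULE_LICENSE("GPL");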