From: David Xu <davidxu06@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: xen_evtchn_do_upcall
Date: Wed, 24 Oct 2012 09:39:35 -0400
Message-ID: <CAGjowiQOwtu9Jr7Gk9KZv1kANoVsKEYLcvt3CJg80wyfUJ2wnw@mail.gmail.com>
In-Reply-To: <1351070623.2237.120.camel@zakaz.uk.xensource.com>
Hi Ian,
Thanks for your reply. I did an experiment as follows.
I assigned 2 vCPUs to a VM: one vCPU (vCPU0) was pinned to a physical CPU
shared with several other VMs, and the other vCPU (vCPU1) was pinned to an
idle physical CPU used by this VM only. In the guest OS I then ran an iperf
server to measure its TCP receive throughput. To reduce the receive delay
caused by vCPU scheduling, I pinned the NIC's IRQ context to vCPU1 and ran
the iperf server on vCPU0 (a rough sketch of the pinning is included below,
after the traces). This approach works well for UDP, but not for TCP. I
traced the functions involved with ftrace and got the results below, which
contain many xen_evtchn_do_upcall invocations. What does this call chain
mean (xen_evtchn_do_upcall => handle_irq_event => xennet_tx_buf_gc =>
gnttab_query_foreign_access)?
1) | tcp_v4_rcv() {
1) 0.087 us | __inet_lookup_established();
1) | sk_filter() {
1) | security_sock_rcv_skb() {
1) 0.049 us | cap_socket_sock_rcv_skb();
1) 0.374 us | }
1) 0.708 us | }
1) | _raw_spin_lock() {
1) | xen_evtchn_do_upcall() {
1) 0.051 us | exit_idle();
1) | irq_enter() {
1) | rcu_irq_enter() {
1) 0.094 us | rcu_exit_nohz();
1) 0.432 us | }
1) 0.055 us | idle_cpu();
1) 1.166 us | }
1) | __xen_evtchn_do_upcall() {
1) 0.120 us | irq_to_desc();
1) | handle_edge_irq() {
1) 0.103 us | _raw_spin_lock();
1) | ack_dynirq() {
1) | evtchn_from_irq() {
1) | info_for_irq() {
1) | irq_get_irq_data() {
1) 0.051 us | irq_to_desc();
1) 0.400 us | }
1) 0.746 us | }
1) 1.074 us | }
1) 0.050 us | irq_move_irq();
1) 1.767 us | }
1) | handle_irq_event() {
1) 0.164 us | _raw_spin_unlock();
1) | handle_irq_event_percpu() {
1) | xennet_interrupt() {
1) 0.125 us | _raw_spin_lock_irqsave();
1) | xennet_tx_buf_gc() {
1) 0.082 us | gnttab_query_foreign_access();
1) 0.050 us | gnttab_end_foreign_access_ref();
1) 0.070 us | gnttab_release_grant_reference();
1) | dev_kfree_skb_irq() {
1) 0.061 us | raise_softirq_irqoff();
1) 0.460 us | }
1) 0.058 us | gnttab_query_foreign_access();
1) 0.050 us | gnttab_end_foreign_access_ref();
1) 0.050 us | gnttab_release_grant_reference();
1) | dev_kfree_skb_irq() {
1) 0.059 us | raise_softirq_irqoff();
1) 0.440 us | }
1) 3.710 us | }
1) 0.092 us | _raw_spin_unlock_irqrestore();
1) 4.845 us | }
1) 0.075 us | note_interrupt();
1) 5.567 us | }
1) 0.055 us | _raw_spin_lock();
1) 6.889 us | }
1) 0.080 us | _raw_spin_unlock();
1) + 10.081 us | }
1) + 10.965 us | }
1) | irq_exit() {
1) | rcu_irq_exit() {
1) 0.086 us | rcu_enter_nohz();
1) 0.424 us | }
1) 0.049 us | idle_cpu();
1) 1.094 us | }
1) + 14.555 us | }
1) 0.120 us | } /* _raw_spin_lock */
1) | __wake_up_sync_key() {
1) 0.099 us | _raw_spin_lock_irqsave();
1) | __wake_up_common() {
1) | autoremove_wake_function() {
1) | default_wake_function() {
1) | try_to_wake_up() {
1) 0.103 us | _raw_spin_lock_irqsave();
1) 0.078 us | task_waking_fair();
1) 0.102 us | select_task_rq_fair();
1) | xen_smp_send_reschedule() {
1) | xen_send_IPI_one() {
1) | notify_remote_via_irq() {
1) | evtchn_from_irq() {
1) | info_for_irq() {
1) | irq_get_irq_data() {
1) 0.067 us | irq_to_desc();
1) 0.396 us | }
1) 0.727 us | }
1) 1.055 us | }
1) 1.699 us | }
1) 2.048 us | }
1) 2.407 us | }
1) 0.066 us | ttwu_stat();
1) 0.114 us | _raw_spin_unlock_irqrestore();
1) 4.941 us | }
1) 5.294 us | }
1) 5.645 us | }
1) 6.023 us | }
1) 0.094 us | _raw_spin_unlock_irqrestore();
1) 7.156 us | }
1) 0.058 us | dst_metric();
1) | inet_csk_reset_xmit_timer.constprop.34() {
1) | sk_reset_timer() {
1) | mod_timer() {
1) | lock_timer_base.isra.30() {
1) 0.099 us | _raw_spin_lock_irqsave();
1) 0.436 us | }
1) 0.049 us | idle_cpu();
1) 0.116 us | _raw_spin_unlock();
1) 0.074 us | _raw_spin_lock();
1) 0.072 us | internal_add_timer();
1) 0.103 us | _raw_spin_unlock_irqrestore();
1) 2.673 us | }
1) 3.039 us | }
1) 3.397 us | }
1) 0.082 us | _raw_spin_unlock();
1) 0.061 us | sock_put();
1) + 48.704 us | }
When I run both the process context of the application (e.g. the iperf
server) and the IRQ context on vCPU1, which is the "fast" core, no
xen_evtchn_do_upcall routine shows up at all:
1) | tcp_v4_rcv() {
1) 0.081 us | __inet_lookup_established();
1) | sk_filter() {
1) | security_sock_rcv_skb() {
1) 0.059 us | cap_socket_sock_rcv_skb();
1) 0.542 us | }
1) 0.875 us | }
1) 0.060 us | _raw_spin_lock();
1) 0.117 us | _raw_spin_unlock();
1) 0.053 us | sock_put();
1) 2.703 us | }
Do you think these xen_evtchn_do_upcall invocations are due to
synchronization between the process context and the softirq context? Thanks.
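For reference, the pinning described above amounts to roughly the
following. This is only an illustrative sketch: the IRQ number (16), the
CPU numbers and the affinity mask are placeholders, and the real NIC IRQ
has to be looked up in /proc/interrupts first.

/* Illustrative only: pin the calling process (e.g. the iperf server) to
 * vCPU0 and, as root, point a NIC IRQ (placeholder number 16) at vCPU1. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);                  /* process context on vCPU0 */
        if (sched_setaffinity(0, sizeof(set), &set))
                perror("sched_setaffinity");

        FILE *f = fopen("/proc/irq/16/smp_affinity", "w"); /* placeholder IRQ */
        if (f) {
                fputs("2\n", f);           /* CPU mask 0x2 == vCPU1 */
                fclose(f);
        } else {
                perror("/proc/irq/16/smp_affinity");
        }
        return 0;
}

The same thing can of course be done from the shell with taskset and by
echoing a mask into /proc/irq/<N>/smp_affinity; the C version is only shown
for completeness.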
Regards,
Cong
2012/10/24 Ian Campbell <Ian.Campbell@citrix.com>
> On Mon, 2012-10-22 at 02:51 +0100, David Xu wrote:
> > Hi,
> >
> >
> > Does anybody know the purpose of this method (xen_evtchn_do_upcall)?
>
> It is the callback used to inject event channels events (i.e. IRQs) into
> the guest. You would expect to see it at the base of any stack trace
> taken from interrupt context.
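
For reference, here is a toy model of the dispatch order as it appears in
the trace above. The empty stubs stand in for the real kernel functions,
the IRQ numbers are placeholders, and this is not the actual kernel
source, just a sketch of the flow the trace shows.

/* Toy model of the upcall dispatch, reconstructed from the ftrace output
 * above -- illustrative only. */
#include <stdio.h>

static void exit_idle(void) { }
static void irq_enter(void) { }
static void irq_exit(void)  { }

/* In the trace, each pending event is handed to the normal IRQ path:
 * handle_edge_irq() -> handle_irq_event() -> driver handler, e.g.
 * xennet_interrupt(). */
static void handle_edge_irq(int irq)
{
        printf("dispatching irq %d\n", irq);
}

static void __xen_evtchn_do_upcall(void)
{
        int pending[] = { 16, 17 };        /* placeholder event-channel IRQs */
        for (unsigned int i = 0; i < sizeof(pending) / sizeof(pending[0]); i++)
                handle_edge_irq(pending[i]);
}

static void xen_evtchn_do_upcall(void)
{
        exit_idle();
        irq_enter();
        __xen_evtchn_do_upcall();
        irq_exit();
}

int main(void)
{
        xen_evtchn_do_upcall();
        return 0;
}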
>
> > When I run a user-level application doing TCP receive and the
> > SoftIRQ for eth0 on the same CPU core, everything is OK. But if I run
> > them on 2 different cores, xen_evtchn_do_upcall() shows up (maybe
> > when local_bh_disable() or local_bh_enable() is called)
>
> It would not be unusual to get an interrupt immediately after
> re-enabling interrupts.
>
> > in the __inet_lookup_established() routine, which takes longer than in
> > the first scenario. Is this due to a synchronization issue between
> > process context and softirq context? Thanks for any reply.
> >
> >
> > 1) | __inet_lookup_established() {
> > 1) | xen_evtchn_do_upcall() {
> > 1) 0.054 us | exit_idle();
> > 1) | irq_enter() {
> > 1) | rcu_irq_enter() {
> > 1) 0.102 us | rcu_exit_nohz();
> > 1) 0.431 us | }
> > 1) 0.064 us | idle_cpu();
> > 1) 1.152 us | }
> > 1) | __xen_evtchn_do_upcall() {
> > 1) 0.119 us | irq_to_desc();
> > 1) | handle_edge_irq() {
> > 1) 0.107 us | _raw_spin_lock();
> > 1) | ack_dynirq() {
> > 1) | evtchn_from_irq() {
> > 1) | info_for_irq() {
> > 1) | irq_get_irq_data() {
> > 1) 0.052 us | irq_to_desc();
> > 1) 0.418 us | }
> > 1) 0.782 us | }
> > 1) 1.135 us | }
> > 1) 0.049 us | irq_move_irq();
> > 1) 1.800 us | }
> > 1) | handle_irq_event() {
> > 1) 0.161 us | _raw_spin_unlock();
> > 1) | handle_irq_event_percpu() {
> > 1) | xennet_interrupt() {
> > 1) 0.125 us | _raw_spin_lock_irqsave();
> > 1) | xennet_tx_buf_gc() {
> > 1) 0.079 us | gnttab_query_foreign_access();
> > 1) 0.050 us | gnttab_end_foreign_access_ref();
> > 1) 0.069 us | gnttab_release_grant_reference();
> > 1) | dev_kfree_skb_irq() {
> > 1) 0.055 us | raise_softirq_irqoff();
> > 1) 0.472 us | }
> > 1) 0.049 us | gnttab_query_foreign_access();
> > 1) 0.058 us | gnttab_end_foreign_access_ref();
> > 1) 0.058 us | gnttab_release_grant_reference();
> > 1) | dev_kfree_skb_irq() {
> > 1) 0.050 us | raise_softirq_irqoff();
> > 1) 0.456 us | }
> > 1) 3.714 us | }
> > 1) 0.102 us | _raw_spin_unlock_irqrestore();
> > 1) 4.857 us | }
> > 1) 0.061 us | note_interrupt();
> > 1) 5.571 us | }
> > 1) 0.054 us | _raw_spin_lock();
> > 1) 6.707 us | }
> > 1) 0.083 us | _raw_spin_unlock();
> > 1) + 10.083 us | }
> > 1) + 10.985 us | }
> > 1) | irq_exit() {
> > 1) | rcu_irq_exit() {
> > 1) 0.087 us | rcu_enter_nohz();
> > 1) 0.429 us | }
> > 1) 0.049 us | idle_cpu();
> > 1) 1.088 us | }
> > 1) + 14.551 us | }
> > 1) 0.191 us | } /* __inet_lookup_established */