From: David Xu <davidxu06@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: xen_evtchn_do_upcall
Date: Wed, 24 Oct 2012 10:23:01 -0400	[thread overview]
Message-ID: <CAGjowiTDPreE7RadXEMJ+017Ve7jfQ3oBZ5CGp5uksqN_TK01g@mail.gmail.com> (raw)
In-Reply-To: <1351086882.18035.45.camel@zakaz.uk.xensource.com>



Hi Ian,

I am trying to improve TCP/UDP performance in a VM. Due to vCPU
scheduling on a platform where pCPUs are shared, the TCP receive delay
becomes significant, which hurts TCP/UDP throughput. So I want to
offload the softirq context to another, idle pCPU, which I call the
fast-tick CPU. The packet receiving process is like this: the IRQ
routine keeps picking packets from the ring buffer and putting them
into the in-kernel TCP receive buffers (receive_queue, prequeue,
backlog_queue), regardless of whether the user process, which runs on
another CPU shared with other VMs, is currently running. Once the vCPU
holding the user process gets scheduled, the user process fetches all
the packets from the in-kernel receive buffers, which improves
throughput. This works well for UDP but unfortunately does not
currently work for TCP.
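
For reference, the split I am relying on is the one in tcp_v4_rcv():
the softirq side takes the socket spinlock and, depending on whether
the process context currently owns the socket, either processes the
packet immediately (ending up on sk_receive_queue), parks it on the
prequeue, or appends it to the backlog. A sketch, simplified from my
memory of ~3.x kernels and not a verbatim excerpt (exact signatures
vary by version):

    /* Softirq-side dispatch in tcp_v4_rcv(), net/ipv4/tcp_ipv4.c (sketch). */
    int sketch_tcp_rcv_dispatch(struct sock *sk, struct sk_buff *skb)
    {
            int ret = 0;

            bh_lock_sock_nested(sk);    /* the per-socket spinlock both
                                           contexts contend for */
            if (!sock_owned_by_user(sk)) {
                    /* Reader does not own the socket: either park the skb
                     * on the prequeue for a sleeping reader to process ... */
                    if (!tcp_prequeue(sk, skb))
                            /* ... or do full TCP processing now; payload
                             * ends up on sk->sk_receive_queue. */
                            ret = tcp_v4_do_rcv(sk, skb);
            } else {
                    /* Reader owns the socket: softirq may only append to
                     * the backlog, which the owner drains in release_sock().
                     * (Newer kernels pass an extra limit argument here.) */
                    if (sk_add_backlog(sk, skb))
                            ret = -ENOBUFS;     /* backlog full: drop */
            }
            bh_unlock_sock(sk);
            return ret;
    }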

I found that those xen_evtchn_do_upcall routines show up when the IRQ
context tries to take the spinlock on the socket (of course, they may
also show up on other paths). If that spinlock is held by the process
context, the IRQ context has to spin on it and cannot put any packets
into the receive buffer in time. So I suspect these
xen_evtchn_do_upcall routines are due to the synchronization between
the process context and the IRQ context: since I run the process
context and the IRQ context on two different vCPUs, when they contend
for the spinlock on the same socket there will be notifications between
the two vCPUs, and those are implemented as events in Xen.
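
To make my suspicion concrete: with paravirtual spinlocks enabled
(CONFIG_PARAVIRT_SPINLOCKS), a vCPU that fails to take a lock quickly
blocks in the hypervisor and is later woken by an event, and that
wakeup is delivered through xen_evtchn_do_upcall. A sketch of the slow
path in arch/x86/xen/spinlock.c of that era, simplified from memory and
not verbatim (the real spinning_lock/unspinning_lock carry a "previous
lock" pointer that I have dropped here):

    /* Waiter side: block this vCPU until the unlocker kicks us. */
    static void sketch_xen_spin_lock_slow(struct xen_spinlock *xl)
    {
            int irq = __this_cpu_read(lock_kicker_irq);

            /* Advertise which lock this vCPU is spinning on, so the
             * unlocker knows whom to kick. */
            spinning_lock(xl);

            /* Block in the hypervisor (SCHEDOP_poll under the hood) until
             * the per-cpu "lock kicker" event channel fires; the wakeup
             * arrives as an upcall, i.e. xen_evtchn_do_upcall. */
            xen_poll_irq(irq);

            unspinning_lock(xl);
    }

    /* Unlocker side: kick every vCPU registered as spinning on this lock. */
    static void sketch_xen_spin_unlock_slow(struct xen_spinlock *xl)
    {
            int cpu;

            for_each_online_cpu(cpu)
                    if (per_cpu(lock_spinners, cpu) == xl)
                            xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
    }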

If there is any error in my description, please correct me. Thanks.

Regards,
Cong

2012/10/24 Ian Campbell <Ian.Campbell@citrix.com>

> On Wed, 2012-10-24 at 14:39 +0100, David Xu wrote:
> > Hi Ian,
> >
> >
> > Thanks for your reply. I did an experiment as follows:
> > I assigned 2 vCPUs to a VM: one vCPU (vCPU0) was pinned to a physical
> > CPU shared with several other VMs, and the other vCPU (vCPU1) was
> > pinned to an idle physical CPU occupied by this VM only. Then, in the
> > guest OS, I ran an iperf server to measure its TCP receive
> > throughput. In order to shrink the receive delay caused by vCPU
> > scheduling, I pinned the IRQ context of the NIC to vCPU1 and ran the
> > iperf server on vCPU0. This method works well for UDP, but does not
> > work for TCP. I traced the functions involved with ftrace and got the
> > following results, which contain lots of xen_evtchn_do_upcall
> > routines. What's the meaning of this call chain
> > (xen_evtchn_do_upcall => handle_irq_event => xennet_tx_buf_gc =>
> > gnttab_query_foreign_access)?
>
> Have you looked at the code for any of those functions?
>
> If you had done so, you'd find it is pretty obviously an interrupt
> being delivered to the network device and the associated work to
> satisfy that interrupt.
>
> It doesn't seem that surprising that an iperf test should involve lots
> of network interrupts.
>
> It's not entirely clear to me what you are expecting to find and/or what
> you are trying to prove.
>
> > When I run both the process context of the application (e.g. the
> > iperf server) and the IRQ context on vCPU1, which is the "fast" core,
> > no xen_evtchn_do_upcall routines are found.
>
> Perhaps on the fast core NAPI is able to kick in and therefore the NIC
> becomes polled instead of interrupt driven?
>
> Ian.
>
>
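
On the NAPI point: my understanding of the pattern, as a generic driver
sketch (not xen-netfront specifics; the rx-irq helpers below are
hypothetical), is that while the poll loop keeps up with the ring, the
device interrupt stays masked, so no event channel upcalls fire:

    struct sketch_nic {
            struct napi_struct napi;
            /* ... device state ... */
    };

    /* Interrupt handler: mask the source, defer the work to the poll. */
    static irqreturn_t sketch_nic_interrupt(int irq, void *dev_id)
    {
            struct sketch_nic *nic = dev_id;

            disable_device_rx_irq(nic);     /* hypothetical helper */
            napi_schedule(&nic->napi);      /* run sketch_nic_poll in softirq */
            return IRQ_HANDLED;
    }

    /* Poll: drain up to 'budget' packets; unmask only when fully drained. */
    static int sketch_nic_poll(struct napi_struct *napi, int budget)
    {
            struct sketch_nic *nic = container_of(napi, struct sketch_nic, napi);
            int done = process_rx_ring(nic, budget);    /* hypothetical helper */

            if (done < budget) {
                    /* Ring drained: re-enter interrupt mode. */
                    napi_complete(napi);
                    enable_device_rx_irq(nic);  /* hypothetical helper */
            }
            return done;
    }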


Thread overview: 5+ messages
2012-10-22  1:51 xen_evtchn_do_upcall David Xu
2012-10-24  9:23 ` xen_evtchn_do_upcall Ian Campbell
2012-10-24 13:39   ` xen_evtchn_do_upcall David Xu
2012-10-24 13:54     ` xen_evtchn_do_upcall Ian Campbell
2012-10-24 14:23       ` David Xu [this message]
