From: Wei Liu <wei.liu2@citrix.com>
To: ian.campbell@citrix.com, netdev@vger.kernel.org,
xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com, david.vrabel@citrix.com, paul.durrant@citrix.com
Subject: [RFC PATCH V2] New Xen netback implementation
Date: Tue, 17 Jan 2012 13:46:56 +0000
Message-ID: <1326808024-3744-1-git-send-email-wei.liu2@citrix.com>
A new netback implementation which includes three major features:
- Global page pool support
- NAPI + kthread 1:1 model
- Netback internal name changes
Changes in V2:
- Fix minor bugs in V1
- Embed pending_tx_info into page pool
- Per-cpu scratch space (a sketch follows this list)
- Notification code path clean up
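For the per-cpu scratch space, the shape of the change is roughly
this (names and the array size are illustrative assumptions, not the
patch's actual code):

    #include <linux/percpu.h>
    #include <xen/interface/grant_table.h>

    #define SCRATCH_COPY_OPS 256   /* illustrative size */

    struct netbk_scratch {
            struct gnttab_copy copy_ops[SCRATCH_COPY_OPS];
    };

    static DEFINE_PER_CPU(struct netbk_scratch, netbk_scratch);

    static void netbk_do_copy(void)
    {
            struct netbk_scratch *scratch;

            /* get_cpu_var() disables preemption, so the scratch
             * area cannot be reused under us on this CPU. */
            scratch = &get_cpu_var(netbk_scratch);
            /* ... fill scratch->copy_ops and issue GNTTABOP_copy ... */
            put_cpu_var(netbk_scratch);
    }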
This patch series is the foundation of future work, so it is better
to get it right first. Patches 1 and 3 have the real meat.
The first benefit of the 1:1 model is scheduling fairness: each vif
is served by its own kernel thread.
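In sketch form, the 1:1 model looks roughly like this (the names
xenvif_kthread, tx_work_todo and xenvif_tx_action are illustrative
assumptions, not the exact code in this series):

    #include <linux/kthread.h>
    #include <linux/netdevice.h>
    #include <linux/sched.h>
    #include <linux/wait.h>

    struct xenvif {
            struct napi_struct napi;   /* per-vif NAPI instance */
            struct task_struct *task;  /* per-vif kernel thread */
            wait_queue_head_t wq;
            /* ... */
    };

    /* Hypothetical helpers standing in for the real work. */
    static bool tx_work_todo(struct xenvif *vif);
    static void xenvif_tx_action(struct xenvif *vif);

    static int xenvif_kthread(void *data)
    {
            struct xenvif *vif = data;

            while (!kthread_should_stop()) {
                    /* Sleep until this vif has work; the host
                     * scheduler balances vifs as ordinary threads. */
                    wait_event_interruptible(vif->wq,
                                    tx_work_todo(vif) ||
                                    kthread_should_stop());
                    if (tx_work_todo(vif))
                            xenvif_tx_action(vif);
                    cond_resched();
            }
            return 0;
    }

With one thread created per vif (e.g. via kthread_run()), a busy vif
can no longer starve others sharing the same worker.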
The rationale behind a global page pool is that we need to limit the
overall memory consumed by all vifs.
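The core of that idea can be sketched as a small allocator with a
global cap (POOL_MAX_PAGES and the function names are illustrative;
the series' page_pool.c is more elaborate and, as of V2, also embeds
pending_tx_info):

    #include <linux/mm.h>
    #include <linux/spinlock.h>

    #define POOL_MAX_PAGES 1024    /* illustrative global cap */

    static DEFINE_SPINLOCK(pool_lock);
    static unsigned int pool_in_use;

    static struct page *pool_get_page(void)
    {
            struct page *page = NULL;

            spin_lock(&pool_lock);
            if (pool_in_use < POOL_MAX_PAGES) {
                    page = alloc_page(GFP_ATOMIC);
                    if (page)
                            pool_in_use++;
            }
            spin_unlock(&pool_lock);
            return page;   /* NULL once the global budget is spent */
    }

    static void pool_put_page(struct page *page)
    {
            spin_lock(&pool_lock);
            __free_page(page);
            pool_in_use--;
            spin_unlock(&pool_lock);
    }

Because every vif draws from the same budget, a single busy vif
cannot pin an unbounded amount of memory.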
Using NAPI opens up the possibility of mitigating interrupts/events;
the notification code path is cleaned up in a separate patch.
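The mitigation pattern NAPI buys us, in sketch form (process_ring and
the struct layout are assumptions for illustration):

    #include <linux/interrupt.h>
    #include <linux/netdevice.h>

    struct xenvif {
            struct napi_struct napi;
            /* ... */
    };

    /* Hypothetical stand-in for the real ring processing. */
    static int process_ring(struct xenvif *vif, int budget);

    static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
    {
            struct xenvif *vif = dev_id;

            /* NAPI schedules the poll routine at most once, so a
             * storm of events collapses into a single poll pass. */
            napi_schedule(&vif->napi);
            return IRQ_HANDLED;
    }

    static int xenvif_poll(struct napi_struct *napi, int budget)
    {
            struct xenvif *vif = container_of(napi, struct xenvif, napi);
            int work_done = process_ring(vif, budget);

            if (work_done < budget)
                    napi_complete(napi); /* drained; wait for next event */

            return work_done;
    }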
The netback internal name changes clean up the code structure after
the switch to the 1:1 model. They also prepare netback for further
code layout changes.
---
drivers/net/xen-netback/Makefile | 2 +-
drivers/net/xen-netback/common.h | 78 ++--
drivers/net/xen-netback/interface.c | 117 ++++--
drivers/net/xen-netback/netback.c | 836 ++++++++++++++---------------------
drivers/net/xen-netback/page_pool.c | 185 ++++++++
drivers/net/xen-netback/page_pool.h | 66 +++
drivers/net/xen-netback/xenbus.c | 6 +-
7 files changed, 704 insertions(+), 586 deletions(-)
Thread overview: 21+ messages
2012-01-17 13:46 Wei Liu [this message]
2012-01-17 13:46 ` [RFC PATCH V2 1/8] netback: page pool version 1 Wei Liu
2012-01-17 13:46 ` [RFC PATCH V2 2/8] netback: add module unload function Wei Liu
2012-01-17 13:46 ` [RFC PATCH V2 3/8] netback: switch to NAPI + kthread model Wei Liu
2012-01-17 17:07 ` Stephen Hemminger
2012-01-17 17:11 ` Wei Liu
2012-01-17 13:47 ` [RFC PATCH V2 4/8] netback: switch to per-cpu scratch space Wei Liu
2012-01-17 13:47 ` [RFC PATCH V2 5/8] netback: add module get/put operations along with vif connect/disconnect Wei Liu
2012-01-17 13:47 ` [RFC PATCH V2 6/8] netback: melt xen_netbk into xenvif Wei Liu
2012-01-17 13:47 ` [RFC PATCH V2 7/8] netback: alter internal function/structure names Wei Liu
2012-01-17 13:47 ` [RFC PATCH V2 8/8] netback: remove unwanted notification generation during NAPI processing Wei Liu
2012-01-27 19:22 ` [RFC PATCH V2] New Xen netback implementation Konrad Rzeszutek Wilk
2012-01-29 13:42 ` Wei Liu
2012-01-29 21:37 ` Konrad Rzeszutek Wilk
2012-01-30 15:01 ` Wei Liu
2012-01-30 18:27 ` Wei Liu
2012-01-30 18:30 ` Wei Liu
2012-01-30 19:41 ` Wei Liu
2012-01-30 15:07 ` Ian Campbell
2012-01-30 15:21 ` Konrad Rzeszutek Wilk
2012-01-30 15:49 ` Ian Campbell