From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: ian.campbell@citrix.com, netdev@vger.kernel.org,
xen-devel@lists.xensource.com, david.vrabel@citrix.com,
paul.durrant@citrix.com
Subject: Re: [RFC PATCH V2] New Xen netback implementation
Date: Fri, 27 Jan 2012 14:22:14 -0500 [thread overview]
Message-ID: <20120127192214.GA14437@phenom.dumpdata.com> (raw)
In-Reply-To: <1326808024-3744-1-git-send-email-wei.liu2@citrix.com>
On Tue, Jan 17, 2012 at 01:46:56PM +0000, Wei Liu wrote:
> A new netback implementation which includes three major features:
>
> - Global page pool support
> - NAPI + kthread 1:1 model
> - Netback internal name changes
>
> Changes in V2:
> - Fix minor bugs in V1
> - Embed pending_tx_info into page pool
> - Per-cpu scratch space
> - Notification code path clean up
>
> This patch series is the foundation of future work, so it is better
> to get it right first. Patches 1 and 3 have the real meat.
I've been playing with these patches and a couple of things
came to mind:
 - would it make sense to also register with the shrinker API? That way,
   if the host is running low on memory, it can reclaim pages from the
   pool code. Perhaps a future TODO..
 - I like the pool code. I was thinking that perhaps (in the future)
   it could be used by blkback as well, as it runs into "not enough
   request structures" with the default settings. And making this dynamic
   would be pretty sweet.
 - This patch set solves the CPU-hogging problem I've seen with the
   older netback. With the older one I could see X netback threads eating
   80% of CPU. With this one, the number is down to 13-14%.
So you can definitely stick 'Tested-by: Konrad..' on them. And definitely
Reviewed-by on the first two - I haven't had a chance to look at the rest.
>
> The first benefit of the 1:1 model is scheduling fairness.
>
> The rationale behind a global page pool is that we need to limit the
> overall memory consumed by all vifs.
>
> Using NAPI makes it possible to mitigate interrupts/events; the code
> path is cleaned up in a separate patch.
>
> The netback internal changes clean up the code structure after
> switching to the 1:1 model. They also prepare netback for further
> code layout changes.
>
> ---
> drivers/net/xen-netback/Makefile | 2 +-
> drivers/net/xen-netback/common.h | 78 ++--
> drivers/net/xen-netback/interface.c | 117 ++++--
> drivers/net/xen-netback/netback.c | 836 ++++++++++++++---------------------
> drivers/net/xen-netback/page_pool.c | 185 ++++++++
> drivers/net/xen-netback/page_pool.h | 66 +++
> drivers/net/xen-netback/xenbus.c | 6 +-
> 7 files changed, 704 insertions(+), 586 deletions(-)
>
Thread overview: 21+ messages
2012-01-17 13:46 [RFC PATCH V2] New Xen netback implementation Wei Liu
2012-01-17 13:46 ` [RFC PATCH V2 1/8] netback: page pool version 1 Wei Liu
2012-01-17 13:46 ` [RFC PATCH V2 2/8] netback: add module unload function Wei Liu
2012-01-17 13:46 ` [RFC PATCH V2 3/8] netback: switch to NAPI + kthread model Wei Liu
2012-01-17 17:07 ` Stephen Hemminger
2012-01-17 17:11 ` Wei Liu
2012-01-17 13:47 ` [RFC PATCH V2 4/8] netback: switch to per-cpu scratch space Wei Liu
2012-01-17 13:47 ` [RFC PATCH V2 5/8] netback: add module get/put operations along with vif connect/disconnect Wei Liu
2012-01-17 13:47 ` [RFC PATCH V2 6/8] netback: melt xen_netbk into xenvif Wei Liu
2012-01-17 13:47 ` [RFC PATCH V2 7/8] netback: alter internal function/structure names Wei Liu
2012-01-17 13:47 ` [RFC PATCH V2 8/8] netback: remove unwanted notification generation during NAPI processing Wei Liu
2012-01-27 19:22 ` Konrad Rzeszutek Wilk [this message]
2012-01-29 13:42 ` [RFC PATCH V2] New Xen netback implementation Wei Liu
2012-01-29 21:37 ` Konrad Rzeszutek Wilk
2012-01-30 15:01 ` Wei Liu
2012-01-30 18:27 ` Wei Liu
2012-01-30 18:30 ` Wei Liu
2012-01-30 19:41 ` Wei Liu
2012-01-30 15:07 ` Ian Campbell
2012-01-30 15:21 ` Konrad Rzeszutek Wilk
2012-01-30 15:49 ` Ian Campbell