From: Ben Hutchings <bhutchings@solarflare.com>
To: John Fastabend <john.r.fastabend@intel.com>, eric.dumazet@gmail.com
Cc: davem@davemloft.net, jarkao2@gmail.com, hadi@cyberus.ca,
shemminger@vyatta.com, tgraf@infradead.org,
nhorman@tuxdriver.com, netdev@vger.kernel.org
Subject: Re: [net-next-2.6 PATCH v5 1/2] net: implement mechanism for HW based QOS
Date: Thu, 06 Jan 2011 18:20:15 +0000 [thread overview]
Message-ID: <1294338015.11825.26.camel@bwh-desktop> (raw)
In-Reply-To: <20110104185600.13692.47967.stgit@jf-dev1-dcblab>
On Tue, 2011-01-04 at 10:56 -0800, John Fastabend wrote:
[...]
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index 0f6b1c9..ae51323 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -646,6 +646,14 @@ struct xps_dev_maps {
> (nr_cpu_ids * sizeof(struct xps_map *)))
> #endif /* CONFIG_XPS */
>
> +#define TC_MAX_QUEUE 16
> +#define TC_BITMASK 15
> +/* HW offloaded queuing disciplines txq count and offset maps */
> +struct netdev_tc_txq {
> + u16 count;
> + u16 offset;
> +};
> +
> /*
> * This structure defines the management hooks for network devices.
> * The following hooks can be defined; unless noted otherwise, they are
> @@ -1146,6 +1154,9 @@ struct net_device {
> /* Data Center Bridging netlink ops */
> const struct dcbnl_rtnl_ops *dcbnl_ops;
> #endif
> + u8 num_tc;
> + struct netdev_tc_txq tc_to_txq[TC_MAX_QUEUE];
> + u8 prio_tc_map[TC_BITMASK+1];
[...]
I'm still concerned by the addition of all this state to every
net_device.  From previous discussion, Eric wanted it embedded, citing
'false sharing', while Stephen thought the state should be accessed
indirectly.
Eric, when you refer to 'false sharing', do you mean that the TC state
might end up sharing a cache line with some other data?  That seems
quite unlikely, as the allocation size will be 128 bytes, and it could
be padded to fill a cache line if that's still a concern.
Ben.
--
Ben Hutchings, Senior Software Engineer, Solarflare Communications
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.
Thread overview: 12+ messages
2011-01-04 18:56 [net-next-2.6 PATCH v5 1/2] net: implement mechanism for HW based QOS John Fastabend
2011-01-04 18:56 ` [net-next-2.6 PATCH v5 2/2] net_sched: implement a root container qdisc sch_mqprio John Fastabend
2011-01-04 22:59 ` Jarek Poplawski
2011-01-05 17:38 ` John Fastabend
2011-01-05 17:47 ` Stephen Hemminger
2011-01-06 0:32 ` Jarek Poplawski
2011-01-06 2:31 ` John Fastabend
2011-01-06 15:41 ` Jarek Poplawski
2011-01-06 18:20 ` Ben Hutchings [this message]
2011-01-06 18:31 ` [net-next-2.6 PATCH v5 1/2] net: implement mechanism for HW based QOS Eric Dumazet
2011-01-06 18:45 ` John Fastabend
2011-01-06 18:56 ` Eric Dumazet