From: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
To: netdev@vger.kernel.org, Daniel Borkmann <daniel@iogearbox.net>,
Mitch Williams <mitch.a.williams@intel.com>,
Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com>,
Jiri Pirko <jiri@resnulli.us>, Thomas Graf <tgraf@suug.ch>,
"David S. Miller" <davem@davemloft.net>
Subject: [PATCH] rtnetlink: Actually use the policy for the IFLA_VF_INFO
Date: Tue, 30 Jun 2015 16:52:55 -0600 [thread overview]
Message-ID: <20150630225255.GA22529@obsidianresearch.com> (raw)
It turns out the policy was defined but never actually checked,
so let's check it.
Fixes: ebc08a6f47ee ("rtnetlink: Add VF config code to rtnetlink")
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
---
net/core/rtnetlink.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
DaveM: This shouldn't be applied until someone with the hardware that
uses this path can test it, to make sure the policy is actually correct
and matches what iproute sends.
I noticed this by inspection when investigating how to properly use
netlink in another area. Compile tested only.
Daniel/Mitch/Jeff: Can you test this?
I suspect the absence of these checks allows user space to cause a
read past the end of a buffer, since the handlers cast each attribute's
payload to a fixed-size struct without validating its length.
I dropped the ifla_vfinfo_policy to match how
IFLA_VF_PORTS/IFLA_VF_PORT is handled, but I wonder if that should be
changed as well?
Thanks,
Jason
diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
index a2b90e1fc115..7d5dc347bf7c 100644
--- a/net/core/rtnetlink.c
+++ b/net/core/rtnetlink.c
@@ -1258,10 +1258,6 @@ static const struct nla_policy ifla_info_policy[IFLA_INFO_MAX+1] = {
 	[IFLA_INFO_SLAVE_DATA]	= { .type = NLA_NESTED },
 };
 
-static const struct nla_policy ifla_vfinfo_policy[IFLA_VF_INFO_MAX+1] = {
-	[IFLA_VF_INFO]		= { .type = NLA_NESTED },
-};
-
 static const struct nla_policy ifla_vf_policy[IFLA_VF_MAX+1] = {
 	[IFLA_VF_MAC]		= { .len = sizeof(struct ifla_vf_mac) },
 	[IFLA_VF_VLAN]		= { .len = sizeof(struct ifla_vf_vlan) },
@@ -1681,6 +1677,7 @@ static int do_setlink(const struct sk_buff *skb,
 	}
 
 	if (tb[IFLA_VFINFO_LIST]) {
+		struct nlattr *vf_attrs[IFLA_VF_MAX + 1];
 		struct nlattr *attr;
 		int rem;
 		nla_for_each_nested(attr, tb[IFLA_VFINFO_LIST], rem) {
@@ -1688,6 +1685,10 @@ static int do_setlink(const struct sk_buff *skb,
 				err = -EINVAL;
 				goto errout;
 			}
+			err = nla_parse_nested(vf_attrs, IFLA_VF_MAX, attr,
+					       ifla_vf_policy);
+			if (err < 0)
+				goto errout;
 			err = do_setvfinfo(dev, attr);
 			if (err < 0)
 				goto errout;
--
2.1.4
Thread overview: 6+ messages
2015-06-30 22:52 Jason Gunthorpe [this message]
2015-07-01  9:36 ` [PATCH] rtnetlink: Actually use the policy for the IFLA_VF_INFO Daniel Borkmann
2015-07-02  8:34   ` Daniel Borkmann
2015-07-02 23:06     ` Jason Gunthorpe
2015-07-03 21:52   ` Daniel Borkmann
2015-07-02 16:23 ` Jason Gunthorpe