netdev.vger.kernel.org archive mirror
From: Alexander Duyck <alexander.h.duyck@redhat.com>
To: "Rustad, Mark D" <mark.d.rustad@intel.com>,
	Alexander Duyck <alexander.duyck@gmail.com>
Cc: "bhelgaas@google.com" <bhelgaas@google.com>,
	"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
	"intel-wired-lan@lists.osuosl.org"
	<intel-wired-lan@lists.osuosl.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Subject: Re: [Intel-wired-lan] [PATCH] pci: Limit VPD reads for all Intel Ethernet devices
Date: Tue, 19 May 2015 16:42:02 -0700	[thread overview]
Message-ID: <555BCA4A.2080400@redhat.com> (raw)
In-Reply-To: <46CDD500-2A1B-4457-BE17-961AFC50769E@intel.com>



On 05/19/2015 03:43 PM, Rustad, Mark D wrote:
>> On May 19, 2015, at 2:17 PM, Alexander Duyck <alexander.duyck@gmail.com> wrote:
>>
>> Any chance you could point me toward the software in question?  I ask because what you are fixing here seems like an implementation issue in the application: you really shouldn't be accessing areas outside the scope of the VPD data structure, and what happens if you do is undefined.
> I don't have it, but if you dump VPD via sysfs you will see that it comes out as 32K in size. The kernel just blindly provides access to the full 32K space allowed by the spec. I'm sure we agree that the kernel should not parse it to find the actual size. If it is read via stdio, say fread, the read size would be whatever buffer size the library chooses to use.
>
> If you looked at the quirks, you might have noticed that Broadcom limited VPD access for some devices for functional reasons. That is what gave me the idea of limiting access to what could possibly be there. With the existing Intel Ethernet quirk in place, it seemed like a simple thing to do.

Actually, we probably should be parsing through the VPD data.  The PCIe 
spec doesn't define what happens if you read past the end marker, and I 
suspect most applications perform sequential reads of the data rather 
than accessing arbitrary offsets, since that is how it is really meant 
to be accessed.  So moving this to a sequenced interface instead of a 
memory-mapped style interface would probably work out better anyway, 
since we could perform multiple reads in sequence instead of one at a 
time.

- Alex
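The parsing suggested above could work roughly as follows.  This is a
hypothetical userspace sketch, not kernel code: it walks the VPD
resource list (per the PCI 2.2 VPD format, where large-resource tags
have bit 7 set followed by a 16-bit little-endian length, and the small
End tag is byte 0x78) to find the actual data size instead of trusting
the full 32K window.  The helper name `vpd_size` is an assumption for
illustration.

```c
#include <stddef.h>
#include <stdint.h>

/* Tag byte values from the PCI 2.2 VPD resource data format. */
#define VPD_LTAG_ID   0x82  /* large resource: Identifier String */
#define VPD_LTAG_RO   0x90  /* large resource: VPD-R read-only data */
#define VPD_LTAG_RW   0x91  /* large resource: VPD-W read/write data */
#define VPD_STAG_END  0x78  /* small resource: End tag */

/* Return the number of valid VPD bytes in buf (up to and including the
 * End tag), or 0 if the data is malformed or no End tag is found. */
static size_t vpd_size(const uint8_t *buf, size_t len)
{
    size_t off = 0;

    while (off < len) {
        uint8_t tag = buf[off];

        if (tag == VPD_STAG_END)
            return off + 1;          /* include the End tag itself */

        if (tag & 0x80) {            /* large resource: 3-byte header */
            if (off + 3 > len)
                return 0;            /* truncated header */
            off += 3 + (size_t)(buf[off + 1] | (buf[off + 2] << 8));
        } else {                     /* small resource: length in bits 2:0 */
            off += 1 + (tag & 0x07);
        }
    }
    return 0;                        /* ran off the end with no End tag */
}
```

A sequenced read interface could then cap reads at `vpd_size()` rather
than exposing the full 32K address space, which is essentially what the
Broadcom quirks accomplish with hard-coded limits.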


Thread overview: 10+ messages
2015-05-19  0:00 [PATCH] pci: Limit VPD reads for all Intel Ethernet devices Mark D Rustad
2015-05-19 15:54 ` [Intel-wired-lan] " Alexander Duyck
2015-05-19 16:19   ` Rustad, Mark D
2015-05-19 17:50     ` Alexander Duyck
2015-05-19 18:38       ` Rustad, Mark D
2015-05-19 20:39         ` Alexander Duyck
2015-05-19 21:04           ` Rustad, Mark D
2015-05-19 21:17             ` Alexander Duyck
2015-05-19 22:43               ` Rustad, Mark D
2015-05-19 23:42                 ` Alexander Duyck [this message]
