From: Alexander Duyck <alexander.h.duyck@redhat.com>
To: "Rustad, Mark D" <mark.d.rustad@intel.com>
Cc: "bhelgaas@google.com" <bhelgaas@google.com>,
"linux-pci@vger.kernel.org" <linux-pci@vger.kernel.org>,
"intel-wired-lan@lists.osuosl.org"
<intel-wired-lan@lists.osuosl.org>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Subject: Re: [Intel-wired-lan] [PATCH] pci: Limit VPD reads for all Intel Ethernet devices
Date: Tue, 19 May 2015 13:39:16 -0700 [thread overview]
Message-ID: <555B9F74.3040004@redhat.com> (raw)
In-Reply-To: <99014381-25AB-416D-8E09-C431B5CD5A6C@intel.com>
On 05/19/2015 11:38 AM, Rustad, Mark D wrote:
>> On May 19, 2015, at 10:50 AM, Alexander Duyck <alexander.h.duyck@redhat.com> wrote:
>>
>> These two patches are very much related.
> They are only related because I saw an opportunity to do this while working on the other issue. That is the only relationship.
>
> <snip material on the other patch>
>
>>>> Artificially limiting the size of the VPD does nothing but cut off possibly useful data. You would be better off providing all of the data on only the first function than providing only partial data on all functions and adding extra lock overhead.
>>> This limit only caps the maximum that the OS will read to what is architecturally possible in these devices. Yes, PCIe architecturally provides for the possibility of more, but these devices do not. More repeating data can be read, however slowly, but there is no possibility of useful content beyond the first 1K. If this limit were set to 0x100, which is more in line with what the actual usage is, it would be an artificial limit, but at 1K it is not. Oh, and it does include devices made by others that incorporate Intel Ethernet silicon, not just Intel-built devices.
>> As per section 3.4.4 of the X540 datasheet, the upper addressable range for the VPD section is 0xFFF, which means the upper limit for the hardware is 0x1000, not 0x400.
> Ok. I have no problem changing it to that. I had been looking at other specs, but 0x1000 really is a hard limit.
>
> <snip more material mostly relating to the other patch>
So how does this improve boot time anyway? The original patch
description said this improved boot time and reduced memory usage but I
have yet to find where any of those gains would actually occur. If you
can point me in that direction I might have a better idea of the
motivations behind this.
- Alex
Thread overview: 10+ messages
2015-05-19 0:00 [PATCH] pci: Limit VPD reads for all Intel Ethernet devices Mark D Rustad
2015-05-19 15:54 ` [Intel-wired-lan] " Alexander Duyck
2015-05-19 16:19 ` Rustad, Mark D
2015-05-19 17:50 ` Alexander Duyck
2015-05-19 18:38 ` Rustad, Mark D
2015-05-19 20:39 ` Alexander Duyck [this message]
2015-05-19 21:04 ` Rustad, Mark D
2015-05-19 21:17 ` Alexander Duyck
2015-05-19 22:43 ` Rustad, Mark D
2015-05-19 23:42 ` Alexander Duyck