From: Przemek Kitszel <przemyslaw.kitszel@intel.com>
To: Jiri Pirko <jiri@resnulli.us>
Cc: <intel-wired-lan@lists.osuosl.org>,
Tony Nguyen <anthony.l.nguyen@intel.com>,
Jakub Kicinski <kuba@kernel.org>,
Cosmin Ratiu <cratiu@nvidia.com>,
Tariq Toukan <tariqt@nvidia.com>, <netdev@vger.kernel.org>,
Konrad Knitter <konrad.knitter@intel.com>,
"Jacob Keller" <jacob.e.keller@intel.com>, <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Paolo Abeni <pabeni@redhat.com>, Andrew Lunn <andrew@lunn.ch>,
<linux-kernel@vger.kernel.org>,
ITP Upstream <nxne.cnse.osdt.itp.upstreaming@intel.com>,
Carolina Jubran <cjubran@nvidia.com>
Subject: Re: [RFC net-next v2 1/2] devlink: add whole device devlink instance
Date: Wed, 26 Feb 2025 16:06:19 +0100 [thread overview]
Message-ID: <31477321-c064-4f3d-b4c9-e858d98d5694@intel.com> (raw)
In-Reply-To: <iiemy2zwko4iehuw6cgbipszcxonanjpumxzv4nbdvgvdgi5fx@jz3hkez3lygw>
On 2/26/25 15:48, Jiri Pirko wrote:
> Tue, Feb 25, 2025 at 04:40:49PM +0100, przemyslaw.kitszel@intel.com wrote:
>> On 2/25/25 15:35, Jiri Pirko wrote:
>>> Tue, Feb 25, 2025 at 12:30:49PM +0100, przemyslaw.kitszel@intel.com wrote:
>
> [...]
>
>>>> output, for all PFs and VFs on given device:
>>>>
>>>> pci/0000:af:00:
>>>> name rss size 8 unit entry size_min 0 size_max 24 size_gran 1
>>>> resources:
>>>> name lut_512 size 0 unit entry size_min 0 size_max 16 size_gran 1
>>>> name lut_2048 size 8 unit entry size_min 0 size_max 8 size_gran 1
>>>>
>>>> What is contributing to the hardness, this is not just one for all ice
>>>> PFs, but one per device, which we distinguish via pci BDF.
>>>
>>> How?
>>
>> code is in ice_adapter_index()
>
> If you pass 2 pfs of the same device to a VM with random BDF, you get 2
> ice_adapters, correct?
Right now, yes.
>
> [...]
What I want is to keep two ice_adapters for two actual devices (SDNs),
i.e. one adapter per physical device, regardless of how the PFs were
enumerated.