From: Ferruh Yigit <ferruh.yigit@intel.com>
To: Gowrishankar <gowrishankar.m@linux.vnet.ibm.com>,
	Declan Doherty <declan.doherty@intel.com>
Cc: dev@dpdk.org, Chao Zhu <chaozhu@linux.vnet.ibm.com>
Subject: Re: [PATCH] net/bonding: enable bonding pmd in ppc64le
Date: Thu, 15 Jun 2017 15:05:12 +0100
Message-ID: <2c88c152-6571-a385-135e-e8b42f04636f@intel.com>
In-Reply-To: <57e42960f186632135099eb2d36ccd9d5cb67686.1497435045.git.gowrishankar.m@linux.vnet.ibm.com>

On 6/14/2017 11:16 AM, Gowrishankar wrote:
> From: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
> 
> Earlier, the bonding PMD was disabled in the default config for ppc64le.
> Enabling it has been verified with active-backup mode, as one instance
> (bonding two VFs in each phy port):
> 
> testpmd-bonding-cmd.txt:
> create bonded device 1 0
> create bonded device 1 0
> add bonding slave 0 4
> add bonding slave 1 4
> add bonding slave 2 5
> add bonding slave 3 5
> set bonding primary 0 4
> set bonding primary 2 5
> port start 4
> port start 5
> show bonding config 4
> show bonding config 5
> set portlist 4,5
> 
> ./ppc_64-power8-linuxapp-gcc/app/testpmd -l 0,8,16
>   -b 0002:01:00.0 -b 0002:01:00.1
>   --socket-mem 512,512
>   -- -i --cmdline-file=../testpmd-bonding-cmd.txt
> 
> EAL: PCI device 0002:01:00.0 on NUMA socket 1
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0002:01:00.1 on NUMA socket 1
> EAL:   Device is blacklisted, not initializing
> EAL: PCI device 0002:01:00.2 on NUMA socket 1
> EAL:   probe driver: 15b3:1014 net_mlx5
> PMD: net_mlx5: PCI information matches, using device "mlx5_2" (SR-IOV: true, MPS: false)
> PMD: net_mlx5: 1 port(s) detected
> PMD: net_mlx5: MPS is disabled
> PMD: net_mlx5: port 1 MAC address is 00:22:33:44:55:02
> EAL: PCI device 0002:01:00.3 on NUMA socket 1
> EAL:   probe driver: 15b3:1014 net_mlx5
> PMD: net_mlx5: PCI information matches, using device "mlx5_3" (SR-IOV: true, MPS: false)
> PMD: net_mlx5: 1 port(s) detected
> PMD: net_mlx5: MPS is disabled
> PMD: net_mlx5: port 1 MAC address is 00:22:33:44:55:03
> EAL: PCI device 0002:01:00.6 on NUMA socket 1
> EAL:   probe driver: 15b3:1014 net_mlx5
> PMD: net_mlx5: PCI information matches, using device "mlx5_4" (SR-IOV: true, MPS: false)
> PMD: net_mlx5: 1 port(s) detected
> PMD: net_mlx5: MPS is disabled
> PMD: net_mlx5: port 1 MAC address is 00:22:33:44:55:06
> EAL: PCI device 0002:01:00.7 on NUMA socket 1
> EAL:   probe driver: 15b3:1014 net_mlx5
> PMD: net_mlx5: PCI information matches, using device "mlx5_5" (SR-IOV: true, MPS: false)
> PMD: net_mlx5: 1 port(s) detected
> PMD: net_mlx5: MPS is disabled
> PMD: net_mlx5: port 1 MAC address is 00:22:33:44:55:07
> Interactive-mode selected
> CLI commands to be read from ../testpmd-bonding-cmd.txt
> USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176, socket=0
> USER1: create a new mbuf pool <mbuf_pool_socket_1>: n=163456, size=2176, socket=1
> Configuring Port 0 (socket 1)
> PMD: net_mlx5: 0x33518f80: TX queues number update: 0 -> 1
> PMD: net_mlx5: 0x33518f80: RX queues number update: 0 -> 1
> Port 0: 00:22:33:44:55:02
> Configuring Port 1 (socket 1)
> PMD: net_mlx5: 0x3351d000: TX queues number update: 0 -> 1
> PMD: net_mlx5: 0x3351d000: RX queues number update: 0 -> 1
> Port 1: 00:22:33:44:55:03
> Configuring Port 2 (socket 1)
> PMD: net_mlx5: 0x33521080: TX queues number update: 0 -> 1
> PMD: net_mlx5: 0x33521080: RX queues number update: 0 -> 1
> Port 2: 00:22:33:44:55:06
> Configuring Port 3 (socket 1)
> PMD: net_mlx5: 0x33525100: TX queues number update: 0 -> 1
> PMD: net_mlx5: 0x33525100: RX queues number update: 0 -> 1
> Port 3: 00:22:33:44:55:07
> Checking link statuses...
> Done
> EAL: Initializing pmd_bond for net_bond_testpmd_0
> EAL: Create bonded device net_bond_testpmd_0 on port 4 in mode 1 on socket 0.
> Created new bonded device net_bond_testpmd_0 on (port 4).
> EAL: Initializing pmd_bond for net_bond_testpmd_1
> EAL: Create bonded device net_bond_testpmd_1 on port 5 in mode 1 on socket 0.
> Created new bonded device net_bond_testpmd_1 on (port 5).
> Configuring Port 4 (socket 0)
> 
> Port 4: LSC event
> Port 4: 00:22:33:44:55:02
> Checking link statuses...
> Done
> Configuring Port 5 (socket 0)
> 
> Port 5: LSC event
> Port 5: 00:22:33:44:55:06
> Checking link statuses...
> Done
>     Bonding mode: 1
>     Slaves (2): [0 1]
>     Active Slaves (2): [0 1]
>     Primary: [0]
>     Bonding mode: 1
>     Slaves (2): [2 3]
>     Active Slaves (2): [2 3]
>     Primary: [2]
> previous number of forwarding ports 4 - changed to number of configured ports 2
> Read CLI commands from ../testpmd-bonding-cmd.txt
> testpmd> start
> 
> Signed-off-by: Gowrishankar Muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
> ---
>  config/defconfig_ppc_64-power8-linuxapp-gcc | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Can you please update the bonding features file [1] to announce PowerPC support?

Btw, I have noticed that bonding hasn't documented any features yet, so
having only "Power8" there will look confusing; hopefully we can document
more for this release.

Anyone willing to document bonding features? Declan :)

[1]
doc/guides/nics/features/bonding.ini
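
For reference, a minimal entry could look something like the sketch
below. This assumes the features .ini layout used by the other drivers
under doc/guides/nics/features/; the exact set of feature names to list
is exactly what still needs to be decided:

;
; Supported features of the 'bonding' driver.
;
; Refer to default.ini for the full list of available PMD features.
;
[Features]
Power8               = Y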

> 
> diff --git a/config/defconfig_ppc_64-power8-linuxapp-gcc b/config/defconfig_ppc_64-power8-linuxapp-gcc
> index 71e4c35..4fce585 100644
> --- a/config/defconfig_ppc_64-power8-linuxapp-gcc
> +++ b/config/defconfig_ppc_64-power8-linuxapp-gcc
> @@ -51,7 +51,7 @@ CONFIG_RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT=n
>  CONFIG_RTE_LIBRTE_IXGBE_PMD=n
>  CONFIG_RTE_LIBRTE_VIRTIO_PMD=y
>  CONFIG_RTE_LIBRTE_VMXNET3_PMD=n
> -CONFIG_RTE_LIBRTE_PMD_BOND=n
> +CONFIG_RTE_LIBRTE_PMD_BOND=y

Bonding is already enabled in the base config, so removing this line
(rather than changing it to "y") will also enable it.
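
In other words, a hunk like the sketch below would have the same effect,
since the base config already sets this option to "y":

 CONFIG_RTE_LIBRTE_VMXNET3_PMD=n
-CONFIG_RTE_LIBRTE_PMD_BOND=n
 CONFIG_RTE_LIBRTE_ENIC_PMD=n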

>  CONFIG_RTE_LIBRTE_ENIC_PMD=n
>  CONFIG_RTE_LIBRTE_FM10K_PMD=n
>  CONFIG_RTE_LIBRTE_SFC_EFX_PMD=n
> 

Thread overview: 4+ messages
2017-06-14 10:16 [PATCH] net/bonding: enable bonding pmd in ppc64le Gowrishankar
2017-06-15 14:05 ` Ferruh Yigit [this message]
2017-07-31 12:54 ` [PATCH v2] " Gowrishankar
2017-07-31 13:09   ` Ferruh Yigit
