From: Ferruh Yigit
Subject: Re: [PATCH v2] net/pcap: rx_iface_in stream type support
Date: Mon, 18 Jun 2018 09:13:15 +0100
To: Ido Goshen
Cc: dev@dpdk.org

On 6/16/2018 10:27 AM, Ido Goshen wrote:
> Is pcap_sendpacket() to the same pcap_t handle thread-safe? I couldn't find a clear answer, so I'd rather assume not.
> If it's not thread-safe, then supporting multiple "iface"s will require multiple pcap_open_live()'s and we are back in the same place.

I am not suggesting extra multi-thread safety.

Currently in the "iface" path the following is hardcoded:

pcaps.num_of_queue = 1;
dumpers.num_of_queue = 1;

It should be possible to update that path to support multiple queues while using the "iface" devarg.

>
>>> I am not sure the existing behavior is intentional, which is capturing sent packets in the Rx pcap handler on the same interface.
>>> Are you aware of any use case for the existing behavior? Perhaps it can be an option to set PCAP_D_IN by default for the rx_iface argument.
> Even if unintentional, I find it very useful for testing, as this way it's very easy to send traffic to the app with tcpreplay on the same host the app is running on.
> tcpreplay sends in the out direction, which will not be captured if PCAP_D_IN is set.
> If PCAP_D_IN is the only option then it will require an external device (or some networking trick) to send packets to the app.
> So, I'd say it is good for testing but less good for real functionality.

OK to keep it if there is a use case.

>
> -----Original Message-----
> From: Ferruh Yigit
> Sent: Friday, June 15, 2018 3:53 PM
> To: Ido Goshen
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v2] net/pcap: rx_iface_in stream type support
>
> On 6/14/2018 9:44 PM, Ido Goshen wrote:
>> I think we are starting to mix two things.
>> One is how to configure a pcap eth dev with multiple queues, and I totally agree it would have been nicer to just say something like "max_tx_queues=N" instead of needing to write "tx_iface" N times, but as it was already supported in that way (for whatever reason) I wasn't trying to enhance or change it.
>> The other issue is the pcap direction API, which I was trying to expose to users of the dpdk pcap device.
>
> Hi Ido,
>
> Assuming the "iface" argument solves the direction issue, I am suggesting adding multi-queue support to the "iface" argument as a solution to your problem.
>
> I am not suggesting new arguments like "max_tx_queues=N"; "iface" can be used the same way rx_iface/tx_iface are used now, by providing it multiple times.
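To make the multi-queue "iface" idea concrete, here is a rough sketch of opening one pcap handle per queue on the same interface, so that no pcap_t is shared between lcores. The helper name and the queue wiring are illustrative assumptions, not the existing rte_eth_pcap.c code; only pcap_open_live() and the num_of_queue fields come from the discussion above.

#include <pcap.h>

/* Hypothetical helper: open the same interface N times so that each
 * Rx/Tx queue gets its own pcap_t and no handle is shared across lcores. */
static int
open_iface_queues(const char *iface, pcap_t *handles[], unsigned int nb_queues)
{
	char errbuf[PCAP_ERRBUF_SIZE];
	unsigned int i;

	for (i = 0; i < nb_queues; i++) {
		handles[i] = pcap_open_live(iface, 65535 /* snaplen */,
					    1 /* promisc */, 0 /* to_ms */, errbuf);
		if (handles[i] == NULL)
			return -1; /* caller would close the already-opened handles */
	}
	/* The probe path would then set pcaps.num_of_queue = nb_queues and
	 * dumpers.num_of_queue = nb_queues instead of the hardcoded 1. */
	return 0;
}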
>
>> Refer to https://www.tcpdump.org/manpages/pcap_setdirection.3pcap.txt
>> or man tcpdump for the -P/--direction in|out|inout option. Actually I
>> think a more realistic emulation of a physical (non-virtual) device
>> would be to capture only the incoming direction (set PCAP_D_IN); again,
>> the existing behavior is very useful too and I didn't try to change or
>> eliminate it, but just to add an additional stream type option.
>
> I am not sure the existing behavior is intentional, which is capturing sent packets in the Rx pcap handler on the same interface.
> Are you aware of any use case for the existing behavior? Perhaps it can be an option to set PCAP_D_IN by default for the rx_iface argument.
>
>>
>> -----Original Message-----
>> From: Ferruh Yigit
>> Sent: Thursday, June 14, 2018 9:09 PM
>> To: Ido Goshen
>> Cc: dev@dpdk.org
>> Subject: Re: [PATCH v2] net/pcap: rx_iface_in stream type support
>>
>> On 6/14/2018 6:14 PM, Ido Goshen wrote:
>>> I use "rx_iface","tx_iface" (and not just "iface") in order to have
>>> multiple TX queues. I just gave a simplified setting with 1 queue. My
>>> app does a full mesh between the ports (not fixed pairs like l2fwd),
>>> so all the forwarding lcores can tx to the same port simultaneously, and as the DPDK docs say:
>>> "Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance."
>>> For example, if I have 3 ports handled by 3 cores it'll be
>>> myapp -c 7 -n1 --no-huge \
>>> --vdev=eth_pcap0,rx_iface=eth0,tx_iface=eth0,tx_iface=eth0,tx_iface=eth0 \
>>> --vdev=eth_pcap1,rx_iface=eth1,tx_iface=eth1,tx_iface=eth1,tx_iface=eth1 \
>>> --vdev=eth_pcap2,rx_iface=eth2,tx_iface=eth2,tx_iface=eth2,tx_iface=eth2 \
>>> -- -p 7
>>> Is there another way to achieve multiple queues in a pcap vdev?
>>
>> If you want to use multiple cores you need multiple queues, as you said, and the above is the way to create multiple queues for pcap.
>>
>> Currently the "iface" argument only supports a single interface in a hardcoded way, but technically it should be possible to update it to support multiple queues.
>>
>> So if the "iface" argument works for you, it would be better to add multi-queue support to "iface" instead of introducing a new device argument.
>>
>>>
>>> I do see that using "iface" behaves differently - I'll try to
>>> investigate why
>>
>> pcap_open_live() is called for both arguments; for the "rx_iface/tx_iface" pair it is called twice, once for each. Not sure if the pcap library returns the same handle or two different handles in this case, since the iface name is the same.
>> For the "iface" argument pcap_open_live() is called once, so we have a single handle for both Rx & Tx. This may be the difference.
>>
>>> And still, even when using "iface", I also see packets that are
>>> transmitted out of eth1 (e.g. tcpreplay -i eth1 packets.pcap) and not
>>> only packets that are received (e.g. ping from the far end to the eth0 ip)
>>
>> This is interesting. I tried with an external packet generator, and "iface" was working as expected for me.
>>
>>>
>>>
>>> -----Original Message-----
>>> From: Ferruh Yigit
>>> Sent: Wednesday, June 13, 2018 1:57 PM
>>> To: Ido Goshen
>>> Cc: dev@dpdk.org
>>> Subject: Re: [PATCH v2] net/pcap: rx_iface_in stream type support
>>>
>>> On 6/5/2018 6:10 PM, Ido Goshen wrote:
>>>> The problem is if a dpdk app uses the same iface(s) both as rx_iface and tx_iface then it will receive back the packets it sends.
>>>> If my app sends a packet to portid=X with rte_eth_tx_burst() then I
>>>> wouldn't expect to receive it back via rte_eth_rx_burst() on that same portid=X (assuming of course there's no external loopback). This comes from the default nature of pcap, which, like a sniffer, captures both the incoming and outgoing directions.
>>>> The patch provides an option to limit the pcap rx_iface to get only incoming traffic, which is more like a real (non-pcap) dpdk device.
>>>>
>>>> for example:
>>>> when using the existing *rx_iface*
>>>> l2fwd -c 3 -n1 --no-huge
>>>> --vdev=eth_pcap0,rx_iface=eth1,tx_iface=eth1
>>>> --vdev=eth_pcap1,rx_iface=dummy0,tx_iface=dummy0 -- -p 3 -T 1
>>>> sending only 1 single packet into eth1 will end in an infinite loop -
>>>
>>> If you are using the same interface for both Rx & Tx, why not use the "iface=xxx"
>>> argument? Can you please test with the following:
>>>
>>> l2fwd -c 3 -n1 --no-huge --vdev=eth_pcap0,iface=eth1
>>> --vdev=eth_pcap1,iface=dummy0 -- -p 3 -T 1
>>>
>>>
>>> I can't reproduce the issue with the above command.
>>>
>>> Thanks,
>>> ferruh
>>>
>>
>
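For reference, the direction behavior discussed above can be shown with plain libpcap. Below is a minimal sketch of opening an Rx handle restricted to incoming packets; the helper name is illustrative, only pcap_open_live(), pcap_setdirection(), PCAP_D_IN and pcap_geterr() are the actual libpcap API, and how the pcap PMD would wire this to an rx_iface_in devarg is an assumption.

#include <stdio.h>
#include <pcap.h>

/* Illustrative helper: open an interface for Rx and restrict capture to
 * incoming packets only, so packets the app itself transmits on the same
 * interface are not looped back into the Rx path. */
static pcap_t *
open_rx_in(const char *iface)
{
	char errbuf[PCAP_ERRBUF_SIZE];
	pcap_t *pcap;

	pcap = pcap_open_live(iface, 65535, 1 /* promisc */, 0 /* to_ms */, errbuf);
	if (pcap == NULL) {
		fprintf(stderr, "pcap_open_live(%s): %s\n", iface, errbuf);
		return NULL;
	}
	/* PCAP_D_IN: deliver only packets received by the interface,
	 * i.e. the behavior the proposed rx_iface_in option would select. */
	if (pcap_setdirection(pcap, PCAP_D_IN) != 0) {
		fprintf(stderr, "pcap_setdirection(%s): %s\n", iface, pcap_geterr(pcap));
		pcap_close(pcap);
		return NULL;
	}
	return pcap;
}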